DPDK patches and discussions
* [dpdk-dev] [PATCH v2 6/7] kni: replace unused variable definition with reserved bytes
From: Chenbo Xia @ 2021-09-18  2:24 UTC (permalink / raw)
  To: dev, david.marchand; +Cc: Ferruh Yigit

The PCI ID and address fields in struct rte_kni_conf are never used. To
avoid breaking the ABI, replace these fields with reserved bytes.

Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
---
 lib/kni/rte_kni.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/kni/rte_kni.h b/lib/kni/rte_kni.h
index b0eaf46104..2281abbf6a 100644
--- a/lib/kni/rte_kni.h
+++ b/lib/kni/rte_kni.h
@@ -17,7 +17,6 @@
  * and burst transmit packets to KNI interfaces.
  */
 
-#include <rte_pci.h>
 #include <rte_memory.h>
 #include <rte_mempool.h>
 #include <rte_ether.h>
@@ -66,8 +65,7 @@ struct rte_kni_conf {
 	uint32_t core_id;   /* Core ID to bind kernel thread on */
 	uint16_t group_id;  /* Group ID */
 	unsigned mbuf_size; /* mbuf size */
-	struct rte_pci_addr addr; /* depreciated */
-	struct rte_pci_id id; /* depreciated */
+	uint8_t rsvd[20];
 
 	__extension__
 	uint8_t force_bind : 1; /* Flag to bind kernel thread */
-- 
2.17.1



* [dpdk-dev] [PATCH v2 7/7] bus/pci: remove ABIs in PCI bus
From: Chenbo Xia @ 2021-09-18  2:24 UTC (permalink / raw)
  To: dev, david.marchand
  Cc: Nicolas Chautru, Ferruh Yigit, Anatoly Burakov, Ray Kinsella,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Matan Azrad, Viacheslav Ovsiienko, Jerin Jacob, Anoob Joseph,
	Fiona Trahe, John Griffin, Deepak Kumar Jain, Andrew Rybchenko,
	Ashish Gupta, Somalapuram Amaranath, Ankur Dwivedi,
	Tejasree Kondoj, Nagadheeraj Rottela, Srikanth Jampala, Jay Zhou,
	Timothy McDaniel, Pavan Nikhilesh, Ashwin Sekhar T K,
	Harman Kalra, Shepard Siegel, Ed Czeck, John Miller,
	Steven Webster, Matt Peters, Rasesh Mody, Shahed Shaikh,
	Ajit Khaparde, Somnath Kotur, Chas Williams, Min Hu (Connor),
	Rahul Lakkireddy, Haiyue Wang, Marcin Wojtas, Michal Krawczyk,
	Shai Brandes, Evgeny Schemeilin, Igor Chauskin, John Daley,
	Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
	Yisen Zhuang, Lijun Ou, Beilei Xing, Andrew Boyer, Rosen Xu,
	Stephen Hemminger, Long Li, Devendra Singh Rawat, Maciej Czekaj,
	Jiawen Wu, Jian Wang, Maxime Coquelin, Yong Wang, Jakub Palider,
	Tomasz Duszynski, Tianfei zhang, Bruce Richardson, Xiaoyun Li,
	Jingjing Wu, Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Ori Kam, Xiao Wang, Thomas Monjalon

As announced in the deprecation note, most of the ABIs in the PCI bus are
removed in this patch. Only the function rte_pci_dump() remains part of
the stable ABI; experimental APIs are kept for future promotion.

This patch creates a new file named pci_driver.h and moves most of the
content of the original rte_bus_pci.h into it. After this change,
pci_driver.h is the interface for drivers and rte_bus_pci.h the interface
for applications. pci_driver.h is listed in driver_sdk_headers so that
out-of-tree drivers can use it.

This patch then replaces the inclusion of rte_bus_pci.h with pci_driver.h
in all affected drivers.

Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Rosen Xu <rosen.xu@intel.com>
---
 app/test/virtual_pmd.c                        |   2 +-
 doc/guides/rel_notes/release_21_11.rst        |   2 +
 drivers/baseband/acc100/rte_acc100_pmd.c      |   2 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |   2 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |   2 +-
 drivers/bus/pci/bsd/pci.c                     |   1 -
 drivers/bus/pci/linux/pci.c                   |   1 -
 drivers/bus/pci/linux/pci_uio.c               |   1 -
 drivers/bus/pci/linux/pci_vfio.c              |   1 -
 drivers/bus/pci/meson.build                   |   4 +
 drivers/bus/pci/pci_common_uio.c              |   1 -
 drivers/bus/pci/pci_driver.h                  | 402 ++++++++++++++++++
 drivers/bus/pci/pci_params.c                  |   1 -
 drivers/bus/pci/private.h                     |   3 +-
 drivers/bus/pci/rte_bus_pci.h                 | 375 +---------------
 drivers/bus/pci/version.map                   |  32 +-
 drivers/common/cnxk/roc_platform.h            |   2 +-
 drivers/common/mlx5/linux/mlx5_common_verbs.c |   2 +-
 drivers/common/mlx5/mlx5_common_pci.c         |   2 +-
 drivers/common/octeontx2/otx2_dev.h           |   2 +-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/common/qat/qat_device.h               |   2 +-
 drivers/common/qat/qat_qp.c                   |   2 +-
 drivers/common/sfc_efx/sfc_efx.h              |   2 +-
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/compress/octeontx/otx_zip.h           |   2 +-
 drivers/compress/qat/qat_comp.c               |   2 +-
 drivers/crypto/ccp/ccp_dev.h                  |   2 +-
 drivers/crypto/ccp/ccp_pci.h                  |   2 +-
 drivers/crypto/ccp/rte_ccp_pmd.c              |   2 +-
 drivers/crypto/cnxk/cn10k_cryptodev.c         |   2 +-
 drivers/crypto/cnxk/cn9k_cryptodev.c          |   2 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/crypto/nitrox/nitrox_device.h         |   2 +-
 drivers/crypto/octeontx/otx_cryptodev.c       |   2 +-
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev.c     |   2 +-
 drivers/crypto/qat/qat_sym.c                  |   2 +-
 drivers/crypto/qat/qat_sym_pmd.c              |   2 +-
 drivers/crypto/virtio/virtio_cryptodev.c      |   2 +-
 drivers/crypto/virtio/virtio_pci.h            |   2 +-
 drivers/event/dlb2/pf/dlb2_main.h             |   2 +-
 drivers/event/dlb2/pf/dlb2_pf.c               |   2 +-
 drivers/event/octeontx/ssovf_probe.c          |   2 +-
 drivers/event/octeontx/timvf_probe.c          |   2 +-
 drivers/event/octeontx2/otx2_evdev.c          |   2 +-
 drivers/mempool/cnxk/cnxk_mempool.c           |   2 +-
 drivers/mempool/octeontx/octeontx_fpavf.c     |   2 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/mempool/octeontx2/otx2_mempool.h      |   2 +-
 drivers/mempool/octeontx2/otx2_mempool_irq.c  |   2 +-
 drivers/meson.build                           |   4 +
 drivers/net/ark/ark_ethdev.c                  |   2 +-
 drivers/net/avp/avp_ethdev.c                  |   2 +-
 drivers/net/bnx2x/bnx2x.h                     |   2 +-
 drivers/net/bnxt/bnxt.h                       |   2 +-
 drivers/net/bonding/rte_eth_bond_args.c       |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |   2 +-
 drivers/net/e1000/em_ethdev.c                 |   2 +-
 drivers/net/e1000/em_rxtx.c                   |   2 +-
 drivers/net/e1000/igb_ethdev.c                |   2 +-
 drivers/net/e1000/igb_pf.c                    |   2 +-
 drivers/net/ena/ena_ethdev.h                  |   2 +-
 drivers/net/enic/base/vnic_dev.h              |   2 +-
 drivers/net/enic/enic_ethdev.c                |   2 +-
 drivers/net/enic/enic_main.c                  |   2 +-
 drivers/net/enic/enic_vf_representor.c        |   2 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |   2 +-
 drivers/net/hinic/base/hinic_pmd_hwif.c       |   2 +-
 drivers/net/hinic/base/hinic_pmd_nicio.c      |   2 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |   2 +-
 drivers/net/hns3/hns3_ethdev.c                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   2 +-
 drivers/net/i40e/i40e_ethdev_vf.c             |   2 +-
 drivers/net/i40e/i40e_vf_representor.c        |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   2 +-
 drivers/net/ionic/ionic.h                     |   2 +-
 drivers/net/ionic/ionic_ethdev.c              |   2 +-
 drivers/net/ipn3ke/ipn3ke_ethdev.c            |   2 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |   2 +-
 drivers/net/ipn3ke/ipn3ke_tm.c                |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 +-
 drivers/net/ixgbe/ixgbe_ethdev.h              |   2 +-
 drivers/net/mlx4/mlx4_ethdev.c                |   2 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |   2 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   2 +-
 drivers/net/mlx5/mlx5.c                       |   2 +-
 drivers/net/mlx5/mlx5_ethdev.c                |   2 +-
 drivers/net/mlx5/mlx5_txq.c                   |   2 +-
 drivers/net/netvsc/hn_vf.c                    |   2 +-
 drivers/net/octeontx/base/octeontx_pkivf.c    |   2 +-
 drivers/net/octeontx/base/octeontx_pkovf.c    |   2 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |   2 +-
 drivers/net/qede/base/bcm_osal.h              |   2 +-
 drivers/net/sfc/sfc.h                         |   2 +-
 drivers/net/sfc/sfc_ethdev.c                  |   2 +-
 drivers/net/sfc/sfc_sriov.c                   |   2 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   2 +-
 drivers/net/txgbe/txgbe_flow.c                |   2 +-
 drivers/net/txgbe/txgbe_pf.c                  |   2 +-
 drivers/net/virtio/virtio_pci.h               |   2 +-
 drivers/net/virtio/virtio_pci_ethdev.c        |   2 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |   2 +-
 drivers/raw/cnxk_bphy/cnxk_bphy.c             |   2 +-
 drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c         |   2 +-
 drivers/raw/cnxk_bphy/cnxk_bphy_irq.c         |   2 +-
 drivers/raw/cnxk_bphy/cnxk_bphy_irq.h         |   2 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |   2 +-
 drivers/raw/ifpga/rte_pmd_ifpga.c             |   2 +-
 drivers/raw/ioat/idxd_pci.c                   |   2 +-
 drivers/raw/ioat/ioat_rawdev.c                |   2 +-
 drivers/raw/ntb/ntb.c                         |   2 +-
 drivers/raw/ntb/ntb_hw_intel.c                |   2 +-
 drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c   |   2 +-
 drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c     |   2 +-
 drivers/raw/octeontx2_ep/otx2_ep_rawdev.c     |   2 +-
 drivers/regex/mlx5/mlx5_regex.c               |   2 +-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 drivers/vdpa/ifc/base/ifcvf_osdep.h           |   2 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   2 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |   2 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/eventdev/eventdev_pmd_pci.h               |   2 +-
 126 files changed, 546 insertions(+), 508 deletions(-)
 create mode 100644 drivers/bus/pci/pci_driver.h

diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed..555f2969ab 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -6,7 +6,7 @@
 #include <rte_ethdev.h>
 #include <ethdev_driver.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_malloc.h>
 #include <rte_memcpy.h>
 #include <rte_memory.h>
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ce3f554e10..864069f9c2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -153,6 +153,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* pci: Removed all ABIs defined in rte_bus_pci.h except the function
+  ``rte_pci_dump()``.
 
 Known Issues
 ------------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..72734784c9 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -14,7 +14,7 @@
 #include <rte_branch_prediction.h>
 #include <rte_hexdump.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #ifdef RTE_BBDEV_OFFLOAD_COST
 #include <rte_cycles.h>
 #endif
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..31cb7e5605 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -11,7 +11,7 @@
 #include <rte_mempool.h>
 #include <rte_errno.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_byteorder.h>
 #ifdef RTE_BBDEV_OFFLOAD_COST
 #include <rte_cycles.h>
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..0dc1417dde 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -11,7 +11,7 @@
 #include <rte_mempool.h>
 #include <rte_errno.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_byteorder.h>
 #ifdef RTE_BBDEV_OFFLOAD_COST
 #include <rte_cycles.h>
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..b7d3a8df33 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -28,7 +28,6 @@
 #include <rte_interrupts.h>
 #include <rte_log.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
 #include <rte_common.h>
 #include <rte_launch.h>
 #include <rte_memory.h>
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..8f91e0233c 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -8,7 +8,6 @@
 #include <rte_log.h>
 #include <rte_bus.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
 #include <rte_malloc.h>
 #include <rte_devargs.h>
 #include <rte_memcpy.h>
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..de028e4874 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -19,7 +19,6 @@
 #include <rte_string_fns.h>
 #include <rte_log.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
 #include <rte_common.h>
 #include <rte_malloc.h>
 
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..2ccd0fa101 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -13,7 +13,6 @@
 
 #include <rte_log.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
 #include <rte_eal_paging.h>
 #include <rte_malloc.h>
 #include <rte_vfio.h>
diff --git a/drivers/bus/pci/meson.build b/drivers/bus/pci/meson.build
index 81c7e94c00..33c09e2622 100644
--- a/drivers/bus/pci/meson.build
+++ b/drivers/bus/pci/meson.build
@@ -29,4 +29,8 @@ if is_windows
     includes += include_directories('windows')
 endif
 
+driver_sdk_headers += files(
+        'pci_driver.h',
+)
+
 deps += ['kvargs']
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..00ef86c1dd 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -11,7 +11,6 @@
 
 #include <rte_eal.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
 #include <rte_tailq.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
diff --git a/drivers/bus/pci/pci_driver.h b/drivers/bus/pci/pci_driver.h
new file mode 100644
index 0000000000..7a913d54c5
--- /dev/null
+++ b/drivers/bus/pci/pci_driver.h
@@ -0,0 +1,402 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2015 Intel Corporation.
+ * Copyright 2013-2014 6WIND S.A.
+ */
+
+#ifndef _PCI_DRIVER_H_
+#define _PCI_DRIVER_H_
+
+/**
+ * @file
+ * PCI device & driver interface
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <limits.h>
+#include <errno.h>
+#include <sys/queue.h>
+#include <stdint.h>
+#include <inttypes.h>
+
+#include <rte_debug.h>
+#include <rte_interrupts.h>
+#include <rte_dev.h>
+#include <rte_bus.h>
+#include <rte_pci.h>
+
+/** Pathname of PCI devices directory. */
+__rte_internal
+const char *rte_pci_get_sysfs_path(void);
+
+/* Forward declarations */
+struct rte_pci_device;
+struct rte_pci_driver;
+
+/** List of PCI devices */
+TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+/** List of PCI drivers */
+TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+
+/* PCI Bus iterators */
+#define FOREACH_DEVICE_ON_PCIBUS(p)	\
+		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+
+#define FOREACH_DRIVER_ON_PCIBUS(p)	\
+		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+
+struct rte_devargs;
+
+enum rte_pci_kernel_driver {
+	RTE_PCI_KDRV_UNKNOWN = 0,  /* may be misc UIO or bifurcated driver */
+	RTE_PCI_KDRV_IGB_UIO,      /* igb_uio for Linux */
+	RTE_PCI_KDRV_VFIO,         /* VFIO for Linux */
+	RTE_PCI_KDRV_UIO_GENERIC,  /* uio_pci_generic for Linux */
+	RTE_PCI_KDRV_NIC_UIO,      /* nic_uio for FreeBSD */
+	RTE_PCI_KDRV_NONE,         /* no attached driver */
+	RTE_PCI_KDRV_NET_UIO,      /* NetUIO for Windows */
+};
+
+/**
+ * A structure describing a PCI device.
+ */
+struct rte_pci_device {
+	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	struct rte_device device;           /**< Inherit core device */
+	struct rte_pci_addr addr;           /**< PCI location. */
+	struct rte_pci_id id;               /**< PCI ID. */
+	struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
+					    /**< PCI Memory Resource */
+	struct rte_intr_handle intr_handle; /**< Interrupt handle */
+	struct rte_pci_driver *driver;      /**< PCI driver used in probing */
+	uint16_t max_vfs;                   /**< sriov enable if not zero */
+	enum rte_pci_kernel_driver kdrv;    /**< Kernel driver passthrough */
+	char name[PCI_PRI_STR_SIZE+1];      /**< PCI location (ASCII) */
+	struct rte_intr_handle vfio_req_intr_handle;
+				/**< Handler of VFIO request interrupt */
+};
+
+/**
+ * @internal
+ * Helper macro for drivers that need to convert to struct rte_pci_device.
+ */
+#define RTE_DEV_TO_PCI(ptr) container_of(ptr, struct rte_pci_device, device)
+
+#define RTE_DEV_TO_PCI_CONST(ptr) \
+	container_of(ptr, const struct rte_pci_device, device)
+
+#define RTE_ETH_DEV_TO_PCI(eth_dev)	RTE_DEV_TO_PCI((eth_dev)->device)
+
+#ifdef __cplusplus
+/** C++ macro used to help building up tables of device IDs */
+#define RTE_PCI_DEVICE(vend, dev) \
+	RTE_CLASS_ANY_ID,         \
+	(vend),                   \
+	(dev),                    \
+	RTE_PCI_ANY_ID,           \
+	RTE_PCI_ANY_ID
+#else
+/** Macro used to help building up tables of device IDs */
+#define RTE_PCI_DEVICE(vend, dev)          \
+	.class_id = RTE_CLASS_ANY_ID,      \
+	.vendor_id = (vend),               \
+	.device_id = (dev),                \
+	.subsystem_vendor_id = RTE_PCI_ANY_ID, \
+	.subsystem_device_id = RTE_PCI_ANY_ID
+#endif
+
+/**
+ * Initialisation function for the driver called during PCI probing.
+ */
+typedef int (rte_pci_probe_t)(struct rte_pci_driver *, struct rte_pci_device *);
+
+/**
+ * Uninitialisation function for the driver called during hotplugging.
+ */
+typedef int (rte_pci_remove_t)(struct rte_pci_device *);
+
+/**
+ * Driver-specific DMA mapping. After a successful call the device
+ * will be able to read/write from/to this segment.
+ *
+ * @param dev
+ *   Pointer to the PCI device.
+ * @param addr
+ *   Starting virtual address of memory to be mapped.
+ * @param iova
+ *   Starting IOVA address of memory to be mapped.
+ * @param len
+ *   Length of memory segment being mapped.
+ * @return
+ *   - 0 On success.
+ *   - Negative value and rte_errno is set otherwise.
+ */
+typedef int (pci_dma_map_t)(struct rte_pci_device *dev, void *addr,
+			    uint64_t iova, size_t len);
+
+/**
+ * Driver-specific DMA un-mapping. After a successful call the device
+ * will not be able to read/write from/to this segment.
+ *
+ * @param dev
+ *   Pointer to the PCI device.
+ * @param addr
+ *   Starting virtual address of memory to be unmapped.
+ * @param iova
+ *   Starting IOVA address of memory to be unmapped.
+ * @param len
+ *   Length of memory segment being unmapped.
+ * @return
+ *   - 0 On success.
+ *   - Negative value and rte_errno is set otherwise.
+ */
+typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
+			      uint64_t iova, size_t len);
+
+/**
+ * A structure describing a PCI driver.
+ */
+struct rte_pci_driver {
+	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	struct rte_driver driver;          /**< Inherit core driver. */
+	struct rte_pci_bus *bus;           /**< PCI bus reference. */
+	rte_pci_probe_t *probe;            /**< Device probe function. */
+	rte_pci_remove_t *remove;          /**< Device remove function. */
+	pci_dma_map_t *dma_map;		   /**< device dma map function. */
+	pci_dma_unmap_t *dma_unmap;	   /**< device dma unmap function. */
+	const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
+	uint32_t drv_flags;                /**< Flags RTE_PCI_DRV_*. */
+};
+
+/**
+ * Structure describing the PCI bus
+ */
+struct rte_pci_bus {
+	struct rte_bus bus;               /**< Inherit the generic class */
+	struct rte_pci_device_list device_list;  /**< List of PCI devices */
+	struct rte_pci_driver_list driver_list;  /**< List of PCI drivers */
+};
+
+/** Device needs PCI BAR mapping (done with either IGB_UIO or VFIO) */
+#define RTE_PCI_DRV_NEED_MAPPING 0x0001
+/** Device needs PCI BAR mapping with enabled write combining (wc) */
+#define RTE_PCI_DRV_WC_ACTIVATE 0x0002
+/** Device already probed can be probed again to check for new ports. */
+#define RTE_PCI_DRV_PROBE_AGAIN 0x0004
+/** Device driver supports link state interrupt */
+#define RTE_PCI_DRV_INTR_LSC	0x0008
+/** Device driver supports device removal interrupt */
+#define RTE_PCI_DRV_INTR_RMV 0x0010
+/** Device driver needs to keep mapped resources if unsupported dev detected */
+#define RTE_PCI_DRV_KEEP_MAPPED_RES 0x0020
+/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
+#define RTE_PCI_DRV_NEED_IOVA_AS_VA 0x0040
+
+/**
+ * Map the PCI device resources in user space virtual memory address
+ *
 + * Note that drivers should not call this function when the flag
 + * RTE_PCI_DRV_NEED_MAPPING is set, as EAL will map the device
 + * for them in that case.
+ *
+ * @param dev
+ *   A pointer to a rte_pci_device structure describing the device
+ *   to use
+ *
+ * @return
+ *   0 on success, negative on error and positive if no driver
+ *   is found for the device.
+ */
+__rte_internal
+int rte_pci_map_device(struct rte_pci_device *dev);
+
+/**
+ * Unmap this device
+ *
+ * @param dev
+ *   A pointer to a rte_pci_device structure describing the device
+ *   to use
+ */
+__rte_internal
+void rte_pci_unmap_device(struct rte_pci_device *dev);
+
+/**
+ * Find device's extended PCI capability.
+ *
+ *  @param dev
+ *    A pointer to rte_pci_device structure.
+ *
+ *  @param cap
+ *    Extended capability to be found, which can be any from
+ *    RTE_PCI_EXT_CAP_ID_*, defined in librte_pci.
+ *
+ *  @return
+ *  > 0: The offset of the next matching extended capability structure
+ *       within the device's PCI configuration space.
+ *  < 0: An error in PCI config space read.
+ *  = 0: Device does not support it.
+ */
+__rte_internal
+off_t rte_pci_find_ext_capability(struct rte_pci_device *dev, uint32_t cap);
+
+/**
+ * Enables/Disables Bus Master for device's PCI command register.
+ *
+ *  @param dev
+ *    A pointer to rte_pci_device structure.
+ *  @param enable
+ *    Enable or disable Bus Master.
+ *
+ *  @return
+ *  0 on success, -1 on error in PCI config space read/write.
+ */
+__rte_internal
+int rte_pci_set_bus_master(struct rte_pci_device *dev, bool enable);
+
+/**
+ * Register a PCI driver.
+ *
+ * @param driver
+ *   A pointer to a rte_pci_driver structure describing the driver
+ *   to be registered.
+ */
+__rte_internal
+void rte_pci_register(struct rte_pci_driver *driver);
+
+/** Helper for PCI device registration from driver (eth, crypto) instance */
+#define RTE_PMD_REGISTER_PCI(nm, pci_drv) \
+RTE_INIT(pciinitfn_ ##nm) \
+{\
+	(pci_drv).driver.name = RTE_STR(nm);\
+	rte_pci_register(&pci_drv); \
+} \
+RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+
+/**
+ * Unregister a PCI driver.
+ *
+ * @param driver
+ *   A pointer to a rte_pci_driver structure describing the driver
+ *   to be unregistered.
+ */
+__rte_internal
+void rte_pci_unregister(struct rte_pci_driver *driver);
+
+/**
+ * Read PCI config space.
+ *
+ * @param device
+ *   A pointer to a rte_pci_device structure describing the device
+ *   to use
+ * @param buf
+ *   A data buffer where the bytes should be read into
+ * @param len
+ *   The length of the data buffer.
+ * @param offset
+ *   The offset into PCI config space
+ * @return
+ *  Number of bytes read on success, negative on error.
+ */
+__rte_internal
+int rte_pci_read_config(const struct rte_pci_device *device,
+		void *buf, size_t len, off_t offset);
+
+/**
+ * Write PCI config space.
+ *
+ * @param device
+ *   A pointer to a rte_pci_device structure describing the device
+ *   to use
+ * @param buf
 + *   A data buffer containing the bytes to be written
+ * @param len
+ *   The length of the data buffer.
+ * @param offset
+ *   The offset into PCI config space
+ */
+__rte_internal
+int rte_pci_write_config(const struct rte_pci_device *device,
+		const void *buf, size_t len, off_t offset);
+
+/**
+ * A structure used to access io resources for a pci device.
+ * rte_pci_ioport is arch, os, driver specific, and should not be used outside
+ * of pci ioport api.
+ */
+struct rte_pci_ioport {
+	struct rte_pci_device *dev;
+	uint64_t base;
+	uint64_t len; /* only filled for memory mapped ports */
+};
+
+/**
+ * Initialize a rte_pci_ioport object for a pci device io resource.
+ *
+ * This object is then used to gain access to those io resources (see below).
+ *
+ * @param dev
+ *   A pointer to a rte_pci_device structure describing the device
+ *   to use.
+ * @param bar
+ *   Index of the io pci resource we want to access.
+ * @param p
+ *   The rte_pci_ioport object to be initialized.
+ * @return
+ *  0 on success, negative on error.
+ */
+__rte_internal
+int rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
+		struct rte_pci_ioport *p);
+
+/**
+ * Release any resources used in a rte_pci_ioport object.
+ *
+ * @param p
+ *   The rte_pci_ioport object to be uninitialized.
+ * @return
+ *  0 on success, negative on error.
+ */
+__rte_internal
+int rte_pci_ioport_unmap(struct rte_pci_ioport *p);
+
+/**
+ * Read from a io pci resource.
+ *
+ * @param p
+ *   The rte_pci_ioport object from which we want to read.
+ * @param data
+ *   A data buffer where the bytes should be read into
+ * @param len
+ *   The length of the data buffer.
+ * @param offset
+ *   The offset into the pci io resource.
+ */
+__rte_internal
+void rte_pci_ioport_read(struct rte_pci_ioport *p,
+		void *data, size_t len, off_t offset);
+
+/**
+ * Write to a io pci resource.
+ *
+ * @param p
+ *   The rte_pci_ioport object to which we want to write.
+ * @param data
 + *   A data buffer containing the bytes to be written
+ * @param len
+ *   The length of the data buffer.
+ * @param offset
+ *   The offset into the pci io resource.
+ */
+__rte_internal
+void rte_pci_ioport_write(struct rte_pci_ioport *p,
+		const void *data, size_t len, off_t offset);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _PCI_DRIVER_H_ */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 691b5ea018..3f4f94cc72 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -3,7 +3,6 @@
  */
 
 #include <rte_bus.h>
-#include <rte_bus_pci.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
 #include <rte_kvargs.h>
diff --git a/drivers/bus/pci/private.h b/drivers/bus/pci/private.h
index 0fbef8e1d8..dd42baf516 100644
--- a/drivers/bus/pci/private.h
+++ b/drivers/bus/pci/private.h
@@ -8,10 +8,11 @@
 #include <stdbool.h>
 #include <stdio.h>
 
-#include <rte_bus_pci.h>
 #include <rte_os_shim.h>
 #include <rte_pci.h>
 
+#include "pci_driver.h"
+
 extern struct rte_pci_bus rte_pci_bus;
 
 struct rte_pci_driver;
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 21d9dd4289..2fb35801f8 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -1,225 +1,17 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2015 Intel Corporation.
- * Copyright 2013-2014 6WIND S.A.
+ * Copyright(c) 2021 Intel Corporation.
  */
 
 #ifndef _RTE_BUS_PCI_H_
 #define _RTE_BUS_PCI_H_
 
-/**
- * @file
- * PCI device & driver interface
- */
-
 #ifdef __cplusplus
 extern "C" {
 #endif
 
-#include <stdio.h>
-#include <stdlib.h>
-#include <limits.h>
-#include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
-#include <inttypes.h>
-
-#include <rte_debug.h>
-#include <rte_interrupts.h>
-#include <rte_dev.h>
-#include <rte_bus.h>
-#include <rte_pci.h>
-
-/** Pathname of PCI devices directory. */
-const char *rte_pci_get_sysfs_path(void);
-
-/* Forward declarations */
-struct rte_pci_device;
-struct rte_pci_driver;
-
-/** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
-/** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
-
-/* PCI Bus iterators */
-#define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
-#define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
-
-struct rte_devargs;
-
-enum rte_pci_kernel_driver {
-	RTE_PCI_KDRV_UNKNOWN = 0,  /* may be misc UIO or bifurcated driver */
-	RTE_PCI_KDRV_IGB_UIO,      /* igb_uio for Linux */
-	RTE_PCI_KDRV_VFIO,         /* VFIO for Linux */
-	RTE_PCI_KDRV_UIO_GENERIC,  /* uio_pci_generic for Linux */
-	RTE_PCI_KDRV_NIC_UIO,      /* nic_uio for FreeBSD */
-	RTE_PCI_KDRV_NONE,         /* no attached driver */
-	RTE_PCI_KDRV_NET_UIO,      /* NetUIO for Windows */
-};
-
-/**
- * A structure describing a PCI device.
- */
-struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
-	struct rte_device device;           /**< Inherit core device */
-	struct rte_pci_addr addr;           /**< PCI location. */
-	struct rte_pci_id id;               /**< PCI ID. */
-	struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
-					    /**< PCI Memory Resource */
-	struct rte_intr_handle intr_handle; /**< Interrupt handle */
-	struct rte_pci_driver *driver;      /**< PCI driver used in probing */
-	uint16_t max_vfs;                   /**< sriov enable if not zero */
-	enum rte_pci_kernel_driver kdrv;    /**< Kernel driver passthrough */
-	char name[PCI_PRI_STR_SIZE+1];      /**< PCI location (ASCII) */
-	struct rte_intr_handle vfio_req_intr_handle;
-				/**< Handler of VFIO request interrupt */
-};
-
-/**
- * @internal
- * Helper macro for drivers that need to convert to struct rte_pci_device.
- */
-#define RTE_DEV_TO_PCI(ptr) container_of(ptr, struct rte_pci_device, device)
-
-#define RTE_DEV_TO_PCI_CONST(ptr) \
-	container_of(ptr, const struct rte_pci_device, device)
-
-#define RTE_ETH_DEV_TO_PCI(eth_dev)	RTE_DEV_TO_PCI((eth_dev)->device)
-
-#ifdef __cplusplus
-/** C++ macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev) \
-	RTE_CLASS_ANY_ID,         \
-	(vend),                   \
-	(dev),                    \
-	RTE_PCI_ANY_ID,           \
-	RTE_PCI_ANY_ID
-#else
-/** Macro used to help building up tables of device IDs */
-#define RTE_PCI_DEVICE(vend, dev)          \
-	.class_id = RTE_CLASS_ANY_ID,      \
-	.vendor_id = (vend),               \
-	.device_id = (dev),                \
-	.subsystem_vendor_id = RTE_PCI_ANY_ID, \
-	.subsystem_device_id = RTE_PCI_ANY_ID
-#endif
-
-/**
- * Initialisation function for the driver called during PCI probing.
- */
-typedef int (rte_pci_probe_t)(struct rte_pci_driver *, struct rte_pci_device *);
-
-/**
- * Uninitialisation function for the driver called during hotplugging.
- */
-typedef int (rte_pci_remove_t)(struct rte_pci_device *);
-
-/**
- * Driver-specific DMA mapping. After a successful call the device
- * will be able to read/write from/to this segment.
- *
- * @param dev
- *   Pointer to the PCI device.
- * @param addr
- *   Starting virtual address of memory to be mapped.
- * @param iova
- *   Starting IOVA address of memory to be mapped.
- * @param len
- *   Length of memory segment being mapped.
- * @return
- *   - 0 On success.
- *   - Negative value and rte_errno is set otherwise.
- */
-typedef int (pci_dma_map_t)(struct rte_pci_device *dev, void *addr,
-			    uint64_t iova, size_t len);
-
-/**
- * Driver-specific DMA un-mapping. After a successful call the device
- * will not be able to read/write from/to this segment.
- *
- * @param dev
- *   Pointer to the PCI device.
- * @param addr
- *   Starting virtual address of memory to be unmapped.
- * @param iova
- *   Starting IOVA address of memory to be unmapped.
- * @param len
- *   Length of memory segment being unmapped.
- * @return
- *   - 0 On success.
- *   - Negative value and rte_errno is set otherwise.
- */
-typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
-			      uint64_t iova, size_t len);
-
-/**
- * A structure describing a PCI driver.
- */
-struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
-	struct rte_driver driver;          /**< Inherit core driver. */
-	struct rte_pci_bus *bus;           /**< PCI bus reference. */
-	rte_pci_probe_t *probe;            /**< Device probe function. */
-	rte_pci_remove_t *remove;          /**< Device remove function. */
-	pci_dma_map_t *dma_map;		   /**< device dma map function. */
-	pci_dma_unmap_t *dma_unmap;	   /**< device dma unmap function. */
-	const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
-	uint32_t drv_flags;                /**< Flags RTE_PCI_DRV_*. */
-};
-
-/**
- * Structure describing the PCI bus
- */
-struct rte_pci_bus {
-	struct rte_bus bus;               /**< Inherit the generic class */
-	struct rte_pci_device_list device_list;  /**< List of PCI devices */
-	struct rte_pci_driver_list driver_list;  /**< List of PCI drivers */
-};
-
-/** Device needs PCI BAR mapping (done with either IGB_UIO or VFIO) */
-#define RTE_PCI_DRV_NEED_MAPPING 0x0001
-/** Device needs PCI BAR mapping with enabled write combining (wc) */
-#define RTE_PCI_DRV_WC_ACTIVATE 0x0002
-/** Device already probed can be probed again to check for new ports. */
-#define RTE_PCI_DRV_PROBE_AGAIN 0x0004
-/** Device driver supports link state interrupt */
-#define RTE_PCI_DRV_INTR_LSC	0x0008
-/** Device driver supports device removal interrupt */
-#define RTE_PCI_DRV_INTR_RMV 0x0010
-/** Device driver needs to keep mapped resources if unsupported dev detected */
-#define RTE_PCI_DRV_KEEP_MAPPED_RES 0x0020
-/** Device driver needs IOVA as VA and cannot work with IOVA as PA */
-#define RTE_PCI_DRV_NEED_IOVA_AS_VA 0x0040
-
-/**
- * Map the PCI device resources in user space virtual memory address
- *
- * Note that driver should not call this function when flag
- * RTE_PCI_DRV_NEED_MAPPING is set, as EAL will do that for
- * you when it's on.
- *
- * @param dev
- *   A pointer to a rte_pci_device structure describing the device
- *   to use
- *
- * @return
- *   0 on success, negative on error and positive if no driver
- *   is found for the device.
- */
-int rte_pci_map_device(struct rte_pci_device *dev);
-
-/**
- * Unmap this device
- *
- * @param dev
- *   A pointer to a rte_pci_device structure describing the device
- *   to use
- */
-void rte_pci_unmap_device(struct rte_pci_device *dev);
+#include <rte_compat.h>
 
 /**
  * Dump the content of the PCI bus.
@@ -229,169 +21,6 @@ void rte_pci_unmap_device(struct rte_pci_device *dev);
  */
 void rte_pci_dump(FILE *f);
 
-/**
- * Find device's extended PCI capability.
- *
- *  @param dev
- *    A pointer to rte_pci_device structure.
- *
- *  @param cap
- *    Extended capability to be found, which can be any from
- *    RTE_PCI_EXT_CAP_ID_*, defined in librte_pci.
- *
- *  @return
- *  > 0: The offset of the next matching extended capability structure
- *       within the device's PCI configuration space.
- *  < 0: An error in PCI config space read.
- *  = 0: Device does not support it.
- */
-__rte_experimental
-off_t rte_pci_find_ext_capability(struct rte_pci_device *dev, uint32_t cap);
-
-/**
- * Enables/Disables Bus Master for device's PCI command register.
- *
- *  @param dev
- *    A pointer to rte_pci_device structure.
- *  @param enable
- *    Enable or disable Bus Master.
- *
- *  @return
- *  0 on success, -1 on error in PCI config space read/write.
- */
-__rte_experimental
-int rte_pci_set_bus_master(struct rte_pci_device *dev, bool enable);
-
-/**
- * Register a PCI driver.
- *
- * @param driver
- *   A pointer to a rte_pci_driver structure describing the driver
- *   to be registered.
- */
-void rte_pci_register(struct rte_pci_driver *driver);
-
-/** Helper for PCI device registration from driver (eth, crypto) instance */
-#define RTE_PMD_REGISTER_PCI(nm, pci_drv) \
-RTE_INIT(pciinitfn_ ##nm) \
-{\
-	(pci_drv).driver.name = RTE_STR(nm);\
-	rte_pci_register(&pci_drv); \
-} \
-RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
-
-/**
- * Unregister a PCI driver.
- *
- * @param driver
- *   A pointer to a rte_pci_driver structure describing the driver
- *   to be unregistered.
- */
-void rte_pci_unregister(struct rte_pci_driver *driver);
-
-/**
- * Read PCI config space.
- *
- * @param device
- *   A pointer to a rte_pci_device structure describing the device
- *   to use
- * @param buf
- *   A data buffer where the bytes should be read into
- * @param len
- *   The length of the data buffer.
- * @param offset
- *   The offset into PCI config space
- * @return
- *  Number of bytes read on success, negative on error.
- */
-int rte_pci_read_config(const struct rte_pci_device *device,
-		void *buf, size_t len, off_t offset);
-
-/**
- * Write PCI config space.
- *
- * @param device
- *   A pointer to a rte_pci_device structure describing the device
- *   to use
- * @param buf
- *   A data buffer containing the bytes should be written
- * @param len
- *   The length of the data buffer.
- * @param offset
- *   The offset into PCI config space
- */
-int rte_pci_write_config(const struct rte_pci_device *device,
-		const void *buf, size_t len, off_t offset);
-
-/**
- * A structure used to access io resources for a pci device.
- * rte_pci_ioport is arch, os, driver specific, and should not be used outside
- * of pci ioport api.
- */
-struct rte_pci_ioport {
-	struct rte_pci_device *dev;
-	uint64_t base;
-	uint64_t len; /* only filled for memory mapped ports */
-};
-
-/**
- * Initialize a rte_pci_ioport object for a pci device io resource.
- *
- * This object is then used to gain access to those io resources (see below).
- *
- * @param dev
- *   A pointer to a rte_pci_device structure describing the device
- *   to use.
- * @param bar
- *   Index of the io pci resource we want to access.
- * @param p
- *   The rte_pci_ioport object to be initialized.
- * @return
- *  0 on success, negative on error.
- */
-int rte_pci_ioport_map(struct rte_pci_device *dev, int bar,
-		struct rte_pci_ioport *p);
-
-/**
- * Release any resources used in a rte_pci_ioport object.
- *
- * @param p
- *   The rte_pci_ioport object to be uninitialized.
- * @return
- *  0 on success, negative on error.
- */
-int rte_pci_ioport_unmap(struct rte_pci_ioport *p);
-
-/**
- * Read from a io pci resource.
- *
- * @param p
- *   The rte_pci_ioport object from which we want to read.
- * @param data
- *   A data buffer where the bytes should be read into
- * @param len
- *   The length of the data buffer.
- * @param offset
- *   The offset into the pci io resource.
- */
-void rte_pci_ioport_read(struct rte_pci_ioport *p,
-		void *data, size_t len, off_t offset);
-
-/**
- * Write to a io pci resource.
- *
- * @param p
- *   The rte_pci_ioport object to which we want to write.
- * @param data
- *   A data buffer where the bytes should be read into
- * @param len
- *   The length of the data buffer.
- * @param offset
- *   The offset into the pci io resource.
- */
-void rte_pci_ioport_write(struct rte_pci_ioport *p,
-		const void *data, size_t len, off_t offset);
-
 /**
  * Read 4 bytes from PCI memory resource.
  *
diff --git a/drivers/bus/pci/version.map b/drivers/bus/pci/version.map
index 01ec836559..73cf881778 100644
--- a/drivers/bus/pci/version.map
+++ b/drivers/bus/pci/version.map
@@ -2,6 +2,22 @@ DPDK_22 {
 	global:
 
 	rte_pci_dump;
+
+	local: *;
+};
+
+EXPERIMENTAL {
+	global:
+
+	# added in 21.11
+	rte_pci_mem_rd32;
+	rte_pci_mem_wr32;
+};
+
+INTERNAL {
+	global:
+
+	rte_pci_find_ext_capability;
 	rte_pci_get_sysfs_path;
 	rte_pci_ioport_map;
 	rte_pci_ioport_read;
@@ -10,22 +26,8 @@ DPDK_22 {
 	rte_pci_map_device;
 	rte_pci_read_config;
 	rte_pci_register;
+	rte_pci_set_bus_master;
 	rte_pci_unmap_device;
 	rte_pci_unregister;
 	rte_pci_write_config;
-
-	local: *;
-};
-
-EXPERIMENTAL {
-	global:
-
-	rte_pci_find_ext_capability;
-
-	# added in 21.08
-	rte_pci_set_bus_master;
-
-	# added in 21.11
-	rte_pci_mem_rd32;
-	rte_pci_mem_wr32;
 };
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b82d..fa8a6ef0af 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -7,7 +7,7 @@
 
 #include <rte_alarm.h>
 #include <rte_bitmap.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_byteorder.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
diff --git a/drivers/common/mlx5/linux/mlx5_common_verbs.c b/drivers/common/mlx5/linux/mlx5_common_verbs.c
index 9080bd3e87..65d795643b 100644
--- a/drivers/common/mlx5/linux/mlx5_common_verbs.c
+++ b/drivers/common/mlx5/linux/mlx5_common_verbs.c
@@ -11,7 +11,7 @@
 #include <inttypes.h>
 
 #include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_bus_auxiliary.h>
 
 #include "mlx5_common_utils.h"
diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c
index 8b38091d87..eaa9a0fff5 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -9,7 +9,7 @@
 #include <rte_errno.h>
 #include <rte_class.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "mlx5_common_log.h"
 #include "mlx5_common_private.h"
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
index d5b2b0d9af..1f36f24cb0 100644
--- a/drivers/common/octeontx2/otx2_dev.h
+++ b/drivers/common/octeontx2/otx2_dev.h
@@ -5,7 +5,7 @@
 #ifndef _OTX2_DEV_H
 #define _OTX2_DEV_H
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "otx2_common.h"
 #include "otx2_irq.h"
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..a73c27c970 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <rte_spinlock.h>
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..8d18365bae 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -4,7 +4,7 @@
 #ifndef _QAT_DEVICE_H_
 #define _QAT_DEVICE_H_
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "qat_common.h"
 #include "qat_logs.h"
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..a964231d2c 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -8,7 +8,7 @@
 #include <rte_malloc.h>
 #include <rte_memzone.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_atomic.h>
 #include <rte_prefetch.h>
 
diff --git a/drivers/common/sfc_efx/sfc_efx.h b/drivers/common/sfc_efx/sfc_efx.h
index c16eca60f3..9829ed0e96 100644
--- a/drivers/common/sfc_efx/sfc_efx.h
+++ b/drivers/common/sfc_efx/sfc_efx.h
@@ -10,7 +10,7 @@
 #ifndef _SFC_EFX_H_
 #define _SFC_EFX_H_
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "efx.h"
 #include "efsys.h"
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index c5e0a83a8c..46b888c127 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -5,7 +5,7 @@
 #include <rte_malloc.h>
 #include <rte_log.h>
 #include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_spinlock.h>
 #include <rte_comp.h>
 #include <rte_compressdev.h>
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index e43f7f5c3e..56136ce734 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -7,7 +7,7 @@
 
 #include <unistd.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_comp.h>
 #include <rte_compressdev.h>
 #include <rte_compressdev_pmd.h>
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..6fa82ad9a8 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -6,7 +6,7 @@
 #include <rte_mbuf.h>
 #include <rte_hexdump.h>
 #include <rte_comp.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_byteorder.h>
 #include <rte_memcpy.h>
 #include <rte_common.h>
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index ca5145c278..eb8cd94716 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -10,7 +10,7 @@
 #include <stdint.h>
 #include <string.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_atomic.h>
 #include <rte_byteorder.h>
 #include <rte_io.h>
diff --git a/drivers/crypto/ccp/ccp_pci.h b/drivers/crypto/ccp/ccp_pci.h
index 7ed3bac406..c8ba9f8789 100644
--- a/drivers/crypto/ccp/ccp_pci.h
+++ b/drivers/crypto/ccp/ccp_pci.h
@@ -7,7 +7,7 @@
 
 #include <stdint.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #define SYSFS_PCI_DEVICES "/sys/bus/pci/devices"
 #define PROC_MODULES "/proc/modules"
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a54d81de46..1dd8edceba 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_string_fns.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_bus_vdev.h>
 #include <rte_common.h>
 #include <rte_cryptodev.h>
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev.c b/drivers/crypto/cnxk/cn10k_cryptodev.c
index 012eb0c051..969442bb11 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev.c b/drivers/crypto/cnxk/cn9k_cryptodev.c
index 6b8cb01a12..2dcd04cb2e 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev.c
@@ -2,7 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index e01be15ade..7957a2242f 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -6,7 +6,7 @@
 #include <rte_mempool.h>
 #include <rte_errno.h>
 #include <rte_log.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memory.h>
 
 #include <mlx5_glue.h>
diff --git a/drivers/crypto/nitrox/nitrox_device.h b/drivers/crypto/nitrox/nitrox_device.h
index 6b8095f42b..91b15d9913 100644
--- a/drivers/crypto/nitrox/nitrox_device.h
+++ b/drivers/crypto/nitrox/nitrox_device.h
@@ -5,7 +5,7 @@
 #ifndef _NITROX_DEVICE_H_
 #define _NITROX_DEVICE_H_
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_cryptodev.h>
 
 struct nitrox_sym_device;
diff --git a/drivers/crypto/octeontx/otx_cryptodev.c b/drivers/crypto/octeontx/otx_cryptodev.c
index c294f86d79..ca92e56a0d 100644
--- a/drivers/crypto/octeontx/otx_cryptodev.c
+++ b/drivers/crypto/octeontx/otx_cryptodev.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Cavium, Inc
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 9b5bde53f8..e7558079db 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_alarm.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_eventdev.h>
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
index 85b1f00263..1ea9b54f41 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev.c
@@ -2,7 +2,7 @@
  * Copyright (C) 2019 Marvell International Ltd.
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 93b257522b..574e3d577d 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -7,7 +7,7 @@
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_crypto_sym.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_byteorder.h>
 
 #include "qat_sym.h"
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index efda921c05..50acd5d189 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2015-2018 Intel Corporation
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_malloc.h>
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 8faa39df4a..7929c71249 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -7,7 +7,7 @@
 #include <rte_common.h>
 #include <rte_errno.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_eal.h>
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 0a7ea1bb64..889c263555 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -9,7 +9,7 @@
 
 #include <rte_eal_paging.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_cryptodev.h>
 
 #include "virtio_crypto.h"
diff --git a/drivers/event/dlb2/pf/dlb2_main.h b/drivers/event/dlb2/pf/dlb2_main.h
index 9eeda482a3..40178e0dda 100644
--- a/drivers/event/dlb2/pf/dlb2_main.h
+++ b/drivers/event/dlb2/pf/dlb2_main.h
@@ -9,7 +9,7 @@
 #include <rte_log.h>
 #include <rte_spinlock.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_eal_paging.h>
 
 #include "base/dlb2_hw_types.h"
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index e9da89d650..8f8c742286 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -25,7 +25,7 @@
 #include <rte_cycles.h>
 #include <rte_io.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_eventdev.h>
 #include <eventdev_pmd.h>
 #include <eventdev_pmd_pci.h>
diff --git a/drivers/event/octeontx/ssovf_probe.c b/drivers/event/octeontx/ssovf_probe.c
index 4da7d1ae45..f45f005e33 100644
--- a/drivers/event/octeontx/ssovf_probe.c
+++ b/drivers/event/octeontx/ssovf_probe.c
@@ -7,7 +7,7 @@
 #include <rte_eal.h>
 #include <rte_io.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "octeontx_mbox.h"
 #include "ssovf_evdev.h"
diff --git a/drivers/event/octeontx/timvf_probe.c b/drivers/event/octeontx/timvf_probe.c
index 59bba31e8e..5a75494b12 100644
--- a/drivers/event/octeontx/timvf_probe.c
+++ b/drivers/event/octeontx/timvf_probe.c
@@ -5,7 +5,7 @@
 #include <rte_eal.h>
 #include <rte_io.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include <octeontx_mbox.h>
 
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 38a6b651d9..5db1880455 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -4,7 +4,7 @@
 
 #include <inttypes.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_eal.h>
 #include <eventdev_pmd_pci.h>
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index dd4d74ca05..175bad355f 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_devargs.h>
 #include <rte_eal.h>
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 94dc5cd815..a7a09f7872 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -13,7 +13,7 @@
 
 #include <rte_atomic.h>
 #include <rte_eal.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_errno.h>
 #include <rte_memory.h>
 #include <rte_malloc.h>
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..6c3969fcfb 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_atomic.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_eal.h>
 #include <rte_io.h>
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
index 8aa548248d..b2ed9a56e8 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ b/drivers/mempool/octeontx2/otx2_mempool.h
@@ -6,7 +6,7 @@
 #define __OTX2_MEMPOOL_H__
 
 #include <rte_bitmap.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_devargs.h>
 #include <rte_mempool.h>
 
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
index 5fa22b9612..c7c9b0d35d 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_irq.c
@@ -5,7 +5,7 @@
 #include <inttypes.h>
 
 #include <rte_common.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "otx2_common.h"
 #include "otx2_irq.h"
diff --git a/drivers/meson.build b/drivers/meson.build
index 3d08540581..0cb82e01ca 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -85,6 +85,7 @@ foreach subpath:subdirs
         name = drv
         sources = []
         headers = []
+        driver_sdk_headers = [] # public headers included by drivers
         objs = []
         cflags = default_cflags
         includes = [include_directories(drv_path)]
@@ -153,6 +154,9 @@ foreach subpath:subdirs
         dpdk_extra_ldflags += pkgconfig_extra_libs
 
         install_headers(headers)
+        if get_option('enable_driver_sdk')
+            install_headers(driver_sdk_headers)
+        endif
 
         # generate pmdinfo sources by building a temporary
         # lib and then running pmdinfogen on the contents of
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c..4c564ce21e 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -6,7 +6,7 @@
 #include <sys/stat.h>
 #include <dlfcn.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_pci.h>
 #include <rte_kvargs.h>
 
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..87e29519e6 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -16,7 +16,7 @@
 #include <rte_atomic.h>
 #include <rte_branch_prediction.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ether.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index 80d19cbfd6..425acaa5ac 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -16,7 +16,7 @@
 
 #include <rte_byteorder.h>
 #include <rte_spinlock.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_io.h>
 
 #include "bnx2x_osal.h"
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index a64b138bc3..381593cbfd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -11,7 +11,7 @@
 #include <sys/queue.h>
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_driver.h>
 #include <rte_memory.h>
 #include <rte_lcore.h>
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 5406e1c934..24ff649ca7 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -4,7 +4,7 @@
 
 #include <rte_devargs.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_kvargs.h>
 
 #include "rte_eth_bond.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..80809d7cd6 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -8,7 +8,7 @@
 #ifndef __T4_ADAPTER_H__
 #define __T4_ADAPTER_H__
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca3976..fb03d69d66 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -20,7 +20,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_branch_prediction.h>
 #include <rte_memory.h>
 #include <rte_tailq.h>
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..657de61aa4 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -13,7 +13,7 @@
 #include <rte_byteorder.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ether.h>
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..c75be0d1e5 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -18,7 +18,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_memzone.h>
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e3..4a598db052 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -15,7 +15,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ether.h>
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9..294fb75dc3 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -10,7 +10,7 @@
 #include <stdarg.h>
 #include <inttypes.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_interrupts.h>
 #include <rte_log.h>
 #include <rte_debug.h>
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5..a3d76383e0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -12,7 +12,7 @@
 #include <ethdev_pci.h>
 #include <rte_cycles.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_timer.h>
 #include <rte_dev.h>
 #include <rte_net.h>
diff --git a/drivers/net/enic/base/vnic_dev.h b/drivers/net/enic/base/vnic_dev.h
index 4b9f75b65f..db9b2af3cd 100644
--- a/drivers/net/enic/base/vnic_dev.h
+++ b/drivers/net/enic/base/vnic_dev.h
@@ -9,7 +9,7 @@
 #include <stdbool.h>
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "enic_compat.h"
 #include "vnic_resource.h"
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..1202fcd4f9 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -8,7 +8,7 @@
 
 #include <rte_dev.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
 #include <rte_geneve.h>
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..f5c1131839 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -10,7 +10,7 @@
 #include <fcntl.h>
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memzone.h>
 #include <rte_malloc.h>
 #include <rte_mbuf.h>
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..f44f6c013e 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -5,7 +5,7 @@
 #include <stdint.h>
 #include <stdio.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <ethdev_driver.h>
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa2..49a6b523f5 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -3,7 +3,7 @@
  */
 
 #include<ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_hash.h>
 #include <rte_jhash.h>
 
diff --git a/drivers/net/hinic/base/hinic_pmd_hwif.c b/drivers/net/hinic/base/hinic_pmd_hwif.c
index 26fa1e27d4..9b1b41ca95 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwif.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwif.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2017 Huawei Technologies Co., Ltd
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "hinic_compat.h"
 #include "hinic_csr.h"
diff --git a/drivers/net/hinic/base/hinic_pmd_nicio.c b/drivers/net/hinic/base/hinic_pmd_nicio.c
index ad5db9f1de..234b433c90 100644
--- a/drivers/net/hinic/base/hinic_pmd_nicio.c
+++ b/drivers/net/hinic/base/hinic_pmd_nicio.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2017 Huawei Technologies Co., Ltd
  */
-#include<rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "hinic_compat.h"
 #include "hinic_pmd_hwdev.h"
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d4..9820e8aa0b 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_pci.h>
 #include <rte_mbuf.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972..d5d5bc8e0f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_alarm.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_pci.h>
 #include <rte_pci.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..9a8a1289c6 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018-2021 HiSilicon Limited.
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_cycles.h>
 #include <rte_geneve.h>
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..51fface85e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -15,7 +15,7 @@
 #include <rte_eal.h>
 #include <rte_string_fns.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ether.h>
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index e8dd6d1dab..0ddff5e343 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -18,7 +18,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_atomic.h>
 #include <rte_branch_prediction.h>
 #include <rte_memory.h>
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..650d8ad502 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation.
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <rte_pci.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..3d3d0d6f3c 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -7,7 +7,7 @@
 
 #include <rte_string_fns.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/ionic/ionic.h b/drivers/net/ionic/ionic.h
index 49b90d1b7c..e946d92c29 100644
--- a/drivers/net/ionic/ionic.h
+++ b/drivers/net/ionic/ionic.h
@@ -8,7 +8,7 @@
 #include <stdint.h>
 #include <inttypes.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "ionic_dev.h"
 #include "ionic_if.h"
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e620793966..e8bcccccb4 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <ethdev_driver.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c
index 964506c6db..2493df2bcd 100644
--- a/drivers/net/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c
@@ -4,7 +4,7 @@
 
 #include <stdint.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <rte_pci.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa587..f288bf09d8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -5,7 +5,7 @@
 #include <stdint.h>
 #include <unistd.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <rte_pci.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 6a9b98fd7f..532d232dbf 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -6,7 +6,7 @@
 #include <stdlib.h>
 #include <string.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_ethdev.h>
 #include <rte_pci.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..ab277bc0dd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -20,7 +20,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_branch_prediction.h>
 #include <rte_memory.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..7b0c1ad542 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -19,7 +19,7 @@
 #include <rte_time.h>
 #include <rte_hash.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_tm_driver.h>
 
 /* need update link, bit flag */
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce..b86f65e32a 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -32,7 +32,7 @@
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_errno.h>
 #include <ethdev_driver.h>
 #include <rte_ether.h>
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c6..2e0a40f0cc 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -25,7 +25,7 @@
 #include <time.h>
 
 #include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_mbuf.h>
 #include <rte_common.h>
 #include <rte_interrupts.h>
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 470b16cb9a..e263818139 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -19,7 +19,7 @@
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_bus_auxiliary.h>
 #include <rte_common.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe7..bbcdec0f49 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -13,7 +13,7 @@
 #include <rte_malloc.h>
 #include <ethdev_driver.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_kvargs.h>
 #include <rte_rwlock.h>
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d98..8888702fc8 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -11,7 +11,7 @@
 #include <errno.h>
 
 #include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_mbuf.h>
 #include <rte_common.h>
 #include <rte_interrupts.h>
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca55..4d8c26f7bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -13,7 +13,7 @@
 #include <rte_mbuf.h>
 #include <rte_malloc.h>
 #include <ethdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_eal_paging.h>
 
diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index 75192e6319..1da4eacb6f 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -21,7 +21,7 @@
 #include <rte_memory.h>
 #include <rte_bus_vmbus.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_log.h>
 #include <rte_string_fns.h>
 #include <rte_alarm.h>
diff --git a/drivers/net/octeontx/base/octeontx_pkivf.c b/drivers/net/octeontx/base/octeontx_pkivf.c
index 0ddff54886..9ff5b78fd0 100644
--- a/drivers/net/octeontx/base/octeontx_pkivf.c
+++ b/drivers/net/octeontx/base/octeontx_pkivf.c
@@ -5,7 +5,7 @@
 #include <string.h>
 
 #include <rte_eal.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "../octeontx_logs.h"
 #include "octeontx_io.h"
diff --git a/drivers/net/octeontx/base/octeontx_pkovf.c b/drivers/net/octeontx/base/octeontx_pkovf.c
index bf28bc7992..a125acadce 100644
--- a/drivers/net/octeontx/base/octeontx_pkovf.c
+++ b/drivers/net/octeontx/base/octeontx_pkovf.c
@@ -10,7 +10,7 @@
 #include <rte_cycles.h>
 #include <rte_malloc.h>
 #include <rte_memory.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_spinlock.h>
 
 #include "../octeontx_logs.h"
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..40541c9280 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -4,7 +4,7 @@
 
 #include <inttypes.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_malloc.h>
 
 #include "otx2_ethdev.h"
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index c5b5399282..10ec1e2fb7 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -21,7 +21,7 @@
 #include <rte_ether.h>
 #include <rte_io.h>
 #include <rte_version.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 /* Forward declaration */
 struct ecore_dev;
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 331e06bac6..f57da8fc96 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -13,7 +13,7 @@
 #include <stdbool.h>
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_driver.h>
 #include <rte_kvargs.h>
 #include <rte_spinlock.h>
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..7f8b052845 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -11,7 +11,7 @@
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
 #include <rte_ether.h>
diff --git a/drivers/net/sfc/sfc_sriov.c b/drivers/net/sfc/sfc_sriov.c
index baa0242433..48c64c8311 100644
--- a/drivers/net/sfc/sfc_sriov.c
+++ b/drivers/net/sfc/sfc_sriov.c
@@ -8,7 +8,7 @@
  */
 
 #include <rte_common.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "sfc.h"
 #include "sfc_log.h"
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..929ce86535 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -32,7 +32,7 @@
 #include <rte_malloc.h>
 #include <rte_random.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_tailq.h>
 #include <rte_devargs.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..faf0c58bd5 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -20,7 +20,7 @@
 #include <rte_ethdev_core.h>
 #include <rte_hash.h>
 #include <rte_hash_crc.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_tm_driver.h>
 
 /* need update link, bit flag */
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b141..f70a9309d9 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -4,7 +4,7 @@
  */
 
 #include <sys/queue.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_malloc.h>
 #include <rte_flow.h>
 #include <rte_flow_driver.h>
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c..58159bc226 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -20,7 +20,7 @@
 #include <rte_memcpy.h>
 #include <rte_malloc.h>
 #include <rte_random.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "base/txgbe.h"
 #include "txgbe_ethdev.h"
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 11e25a0142..a9016534aa 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -9,7 +9,7 @@
 #include <stdbool.h>
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <ethdev_driver.h>
 
 #include "virtio.h"
diff --git a/drivers/net/virtio/virtio_pci_ethdev.c b/drivers/net/virtio/virtio_pci_ethdev.c
index 4083853c48..13dbc0c419 100644
--- a/drivers/net/virtio/virtio_pci_ethdev.c
+++ b/drivers/net/virtio/virtio_pci_ethdev.c
@@ -11,7 +11,7 @@
 #include <ethdev_driver.h>
 #include <ethdev_pci.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_errno.h>
 
 #include <rte_memory.h>
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 2f40ae907d..e73f63e8ce 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -19,7 +19,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_branch_prediction.h>
 #include <rte_memory.h>
 #include <rte_memzone.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy.c b/drivers/raw/cnxk_bphy/cnxk_bphy.c
index 9cb3f8d332..61c60ed363 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2021 Marvell.
  */
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_eal.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c b/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
index ade45ab741..f9d7353a18 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c
@@ -3,7 +3,7 @@
  */
 #include <string.h>
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_rawdev.h>
 #include <rte_rawdev_pmd.h>
 
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
index bbcc285a7a..6014992934 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.c
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2021 Marvell.
  */
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_pci.h>
 #include <rte_rawdev.h>
 #include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
index b55147b93e..7a623b48c8 100644
--- a/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
+++ b/drivers/raw/cnxk_bphy/cnxk_bphy_irq.h
@@ -5,7 +5,7 @@
 #ifndef _CNXK_BPHY_IRQ_
 #define _CNXK_BPHY_IRQ_
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_dev.h>
 
 #include <roc_api.h>
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..eb0c6e271f 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -16,7 +16,7 @@
 #include <rte_devargs.h>
 #include <rte_memcpy.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_kvargs.h>
 #include <rte_alarm.h>
 #include <rte_interrupts.h>
diff --git a/drivers/raw/ifpga/rte_pmd_ifpga.c b/drivers/raw/ifpga/rte_pmd_ifpga.c
index 23146432c2..959aad414d 100644
--- a/drivers/raw/ifpga/rte_pmd_ifpga.c
+++ b/drivers/raw/ifpga/rte_pmd_ifpga.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_rawdev.h>
 #include <rte_rawdev_pmd.h>
 #include "rte_pmd_ifpga.h"
diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 13515dbc6c..257e4c004a 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memzone.h>
 #include <rte_devargs.h>
 
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 5396671d4f..f1d4220b69 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -3,7 +3,7 @@
  */
 
 #include <rte_cycles.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memzone.h>
 #include <rte_string_fns.h>
 #include <rte_rawdev_pmd.h>
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..a74a42f8d2 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -13,7 +13,7 @@
 #include <rte_log.h>
 #include <rte_pci.h>
 #include <rte_mbuf.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_memzone.h>
 #include <rte_memcpy.h>
 #include <rte_rawdev.h>
diff --git a/drivers/raw/ntb/ntb_hw_intel.c b/drivers/raw/ntb/ntb_hw_intel.c
index a742e8fbb9..072c2dd7b9 100644
--- a/drivers/raw/ntb/ntb_hw_intel.c
+++ b/drivers/raw/ntb/ntb_hw_intel.c
@@ -8,7 +8,7 @@
 #include <rte_io.h>
 #include <rte_eal.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_rawdev.h>
 #include <rte_rawdev_pmd.h>
 
diff --git a/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c b/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
index 8c01f25ec7..179e3e6dbb 100644
--- a/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
+++ b/drivers/raw/octeontx2_dma/otx2_dpi_rawdev.c
@@ -6,7 +6,7 @@
 #include <unistd.h>
 
 #include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_common.h>
 #include <rte_eal.h>
 #include <rte_lcore.h>
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c b/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
index d04e957d82..214fd83169 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_enqdeq.c
@@ -8,7 +8,7 @@
 #include <fcntl.h>
 
 #include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_eal.h>
 #include <rte_lcore.h>
 #include <rte_mempool.h>
diff --git a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
index b2ccdda83e..560a8bbf03 100644
--- a/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
+++ b/drivers/raw/octeontx2_ep/otx2_ep_rawdev.c
@@ -5,7 +5,7 @@
 #include <unistd.h>
 
 #include <rte_bus.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_eal.h>
 #include <rte_lcore.h>
 #include <rte_mempool.h>
diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index f17b6df47f..0926aec5bc 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -9,7 +9,7 @@
 #include <rte_regexdev.h>
 #include <rte_regexdev_core.h>
 #include <rte_regexdev_driver.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index c79445ce7d..1ba12cd576 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -10,7 +10,7 @@
 #include <rte_malloc.h>
 #include <rte_log.h>
 #include <rte_errno.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_pci.h>
 #include <rte_regexdev_driver.h>
 #include <rte_mbuf.h>
diff --git a/drivers/vdpa/ifc/base/ifcvf_osdep.h b/drivers/vdpa/ifc/base/ifcvf_osdep.h
index 6aef25ea45..3f3eb46f3f 100644
--- a/drivers/vdpa/ifc/base/ifcvf_osdep.h
+++ b/drivers/vdpa/ifc/base/ifcvf_osdep.h
@@ -10,7 +10,7 @@
 
 #include <rte_cycles.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_log.h>
 #include <rte_io.h>
 
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 1dc813d0a3..968dea5055 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -14,7 +14,7 @@
 #include <rte_eal_paging.h>
 #include <rte_malloc.h>
 #include <rte_memory.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_vhost.h>
 #include <rte_vdpa.h>
 #include <rte_vdpa_dev.h>
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..81c7e738f5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -12,7 +12,7 @@
 #include <rte_log.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include <mlx5_glue.h>
 #include <mlx5_common.h>
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..76779faa6c 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -8,7 +8,7 @@
 
 #include <rte_malloc.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 #include <rte_config.h>
 #include <ethdev_driver.h>
 
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index d14ea634b8..260362aacb 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -24,7 +24,7 @@ extern "C" {
 #include <rte_eal.h>
 #include <rte_lcore.h>
 #include <rte_pci.h>
-#include <rte_bus_pci.h>
+#include <pci_driver.h>
 
 #include "eventdev_pmd.h"
 
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
  @ 2021-09-19 14:25  0%                 ` Gujjar, Abhinandan S
  2021-09-19 18:49  3%                   ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Gujjar, Abhinandan S @ 2021-09-19 14:25 UTC (permalink / raw)
  To: Anoob Joseph, Shijith Thotton
  Cc: Jerin Jacob Kollanukkaran, Pavan Nikhilesh Bhagavatula,
	Akhil Goyal, Ray Kinsella, Ankur Dwivedi, dev

Hi Shijith & Anoob,

> -----Original Message-----
> From: Anoob Joseph <anoobj@marvell.com>
> Sent: Tuesday, September 14, 2021 10:16 AM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Pavan Nikhilesh
> Bhagavatula <pbhagavatula@marvell.com>; Akhil Goyal
> <gakhil@marvell.com>; Ray Kinsella <mdr@ashroe.eu>; Ankur Dwivedi
> <adwivedi@marvell.com>; Shijith Thotton <sthotton@marvell.com>;
> dev@dpdk.org
> Subject: RE: [PATCH v3] eventdev: update crypto adapter metadata
> structures
> 
> Hi Abhinandan, Shijith,
> 
> Please see inline.
> 
> Thanks,
> Anoob
> 
> > -----Original Message-----
> > From: Shijith Thotton <sthotton@marvell.com>
> > Sent: Tuesday, September 14, 2021 10:07 AM
> > To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; dev@dpdk.org
> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Anoob Joseph
> > <anoobj@marvell.com>; Pavan Nikhilesh Bhagavatula
> > <pbhagavatula@marvell.com>; Akhil Goyal <gakhil@marvell.com>; Ray
> > Kinsella <mdr@ashroe.eu>; Ankur Dwivedi <adwivedi@marvell.com>
> > Subject: RE: [PATCH v3] eventdev: update crypto adapter metadata
> > structures
> >
> > >> >>
> > >> >> >> In crypto adapter metadata, reserved bytes in request info
> > >> >> >> structure is a space holder for response info. It enforces an
> > >> >> >> order of operation if the structures are updated using memcpy
> > >> >> >> to avoid overwriting response info. It is logical to move the
> > >> >> >> reserved space out of request info. It also solves the
> > >> >> >> ordering issue
> > >> mentioned before.
> > >> >> >I would like to understand what kind of ordering issue you have
> > >> >> >faced with the current approach. Could you please give an
> > >> >> >example/sequence
> > >> >> and explain?
> > >> >> >
> > >> >>
> > >> >> I have seen this issue with crypto adapter autotest (#n215).
> > >> >>
> > >> >> Example:
> > >> >> rte_memcpy(&m_data.response_info, &response_info,
> > >> >> sizeof(response_info)); rte_memcpy(&m_data.request_info,
> > >> >> &request_info, sizeof(request_info));
> > >> >>
> > >> >> Here response info is getting overwritten by request info.
> > >> >> Above lines can reordered to fix the issue, but can be ignored
> > >> >> with this
> > >> patch.
> > >> >There is a reason for designing the metadata in this way.
> > >> >Right now, sizeof (union rte_event_crypto_metadata) is 16 bytes.
> > >> >So, the session based case needs just 16 bytes to store the data.
> > >> >Whereas, for sessionless case each crypto_ops requires another 16
> > bytes.
> > >> >
> > >> >By changing the struct in the following way you are doubling the
> > >> >memory requirement.
> > >> >With the changes, for sessionless case, each crypto op requires 32
> > >> >bytes of space instead of 16 bytes and the mempool will be bigger.
> > >> >This will have the perf impact too!
> > >> >
> > >> >You can just copy the individual members(cdev_id & queue_pair_id)
> > >> >after the response_info.
> > >> >OR You have a better way?
> > >> >
> > >>
> > >> I missed the second word of event structure. It adds an extra 8
> > >> bytes, which is not required.
> > >I guess you meant not adding, it is overlapping with request information.
> > >> Let me know, what you think of below change to metadata structure.
> > >>
> > >> struct rte_event_crypto_metadata {
> > >> 	uint64_t response_info; // 8 bytes
> > >With this, it lags clarity saying it is an event information.
> > >> 	struct rte_event_crypto_request request_info; // 8 bytes };
> > >>
> > >> Total structure size is 16 bytes.
> > >>
> > >> Response info can be set like below in test app:
> > >> 	m_data.response_info = response_info.event;
> > >>
> > >> The main aim of this patch is to decouple response info from request
> info.
> > >The decoupling was required as it was doing memcpy.
> > >If you copy the individual members of request info(after response),
> > >you don't require it.
> >
> > With this change, the application will be free of such constraints.
> 
> [Anoob] Why don't we make it like,
> 
> struct rte_event_crypto_metadata {
>  	uint64_t ev_w0_template;
>               /**< Template of the first word of ``struct rte_event``
> (rte_event.event) to be set in the events generated by crypto adapter in
> both RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> 	 * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD modes.
> 	*/
>  	struct rte_event_crypto_request request_info; };
> 
> This way, the usage would become more obvious and we will not have
> additional space reserved as well.

Thanks for the suggestion. At this point, it looks like an application used memcpy without understanding the data structure/
internals, and we are trying to fix that by changing the actual data structure instead of fixing the application!
With this patch, the other applications (outside of DPDK) which are using the data structures correctly are forced to change!
I don't think this is the right way to handle it. We should be updating the test case and documentation for this rather
than changing the data structure. I expect the next version of this patch to include those changes. Thanks!

> 
> >
> > >>
> > >> >>
> > >> >> >>
> > >> >> >> This patch removes the reserve field from request info and
> > >> >> >> makes event crypto metadata type to structure from union to
> > >> >> >> make space for response info.
> > >> >> >>
> > >> >> >> App and drivers are updated as per metadata change.
> > >> >> >>
> > >> >> >> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > >> >> >> Acked-by: Anoob Joseph <anoobj@marvell.com>
> > >> >> >> ---
> > >> >> >> v3:
> > >> >> >> * Updated ABI section of release notes.
> > >> >> >>
> > >> >> >> v2:
> > >> >> >> * Updated deprecation notice.
> > >> >> >>
> > >> >> >> v1:
> > >> >> >> * Rebased.
> > >> >> >>
> > >> >> >>  app/test/test_event_crypto_adapter.c              | 14 +++++++------
> -
> > >> >> >>  doc/guides/rel_notes/deprecation.rst              |  6 ------
> > >> >> >>  doc/guides/rel_notes/release_21_11.rst            |  2 ++
> > >> >> >>  drivers/crypto/octeontx/otx_cryptodev_ops.c       |  8 ++++----
> > >> >> >>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c     |  4 ++--
> > >> >> >>  .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h  |  4 ++--
> > >> >> >>  lib/eventdev/rte_event_crypto_adapter.c           |  8 ++++----
> > >> >> >>  lib/eventdev/rte_event_crypto_adapter.h           | 15 +++++-------
> --
> > -
> > >> >> >>  8 files changed, 26 insertions(+), 35 deletions(-)
> > >> >> >>
> > >> >> >> diff --git a/app/test/test_event_crypto_adapter.c
> > >> >> >> b/app/test/test_event_crypto_adapter.c
> > >> >> >> index 3ad20921e2..0d73694d3a 100644
> > >> >> >> --- a/app/test/test_event_crypto_adapter.c
> > >> >> >> +++ b/app/test/test_event_crypto_adapter.c
> > >> >> >> @@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t
> > session_less)
> > >> {
> > >> >> >>  	struct rte_crypto_sym_xform cipher_xform;
> > >> >> >>  	struct rte_cryptodev_sym_session *sess;
> > >> >> >> -	union rte_event_crypto_metadata m_data;
> > >> >> >> +	struct rte_event_crypto_metadata m_data;
> > >> >> >>  	struct rte_crypto_sym_op *sym_op;
> > >> >> >>  	struct rte_crypto_op *op;
> > >> >> >>  	struct rte_mbuf *m;
> > >> >> >> @@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)
> {
> > >> >> >>  	struct rte_crypto_sym_xform cipher_xform;
> > >> >> >>  	struct rte_cryptodev_sym_session *sess;
> > >> >> >> -	union rte_event_crypto_metadata m_data;
> > >> >> >> +	struct rte_event_crypto_metadata m_data;
> > >> >> >>  	struct rte_crypto_sym_op *sym_op;
> > >> >> >>  	struct rte_crypto_op *op;
> > >> >> >>  	struct rte_mbuf *m;
> > >> >> >> @@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
> > >> >> >>  		if (cap &
> > >> >> >> RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
> > >> >> >>  			/* Fill in private user data information */
> > >> >> >>  			rte_memcpy(&m_data.response_info,
> > >> >> &response_info,
> > >> >> >> -				   sizeof(m_data));
> > >> >> >> +				   sizeof(response_info));
> > >> >> >>
> > 	rte_cryptodev_sym_session_set_user_data(sess,
> > >> >> >>  						&m_data,
> > sizeof(m_data));
> > >> >> >>  		}
> > >> >> >> @@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
> > >> >> >>  		op->private_data_offset = len;
> > >> >> >>  		/* Fill in private data information */
> > >> >> >>  		rte_memcpy(&m_data.response_info,
> > &response_info,
> > >> >> >> -			   sizeof(m_data));
> > >> >> >> +			   sizeof(response_info));
> > >> >> >>  		rte_memcpy((uint8_t *)op + len, &m_data,
> > >sizeof(m_data));
> > >> >> >>  	}
> > >> >> >>
> > >> >> >> @@ -519,7 +519,7 @@ configure_cryptodev(void)
> > >> >> >>  			DEFAULT_NUM_XFORMS *
> > >> >> >>  			sizeof(struct rte_crypto_sym_xform) +
> > >> >> >>  			MAXIMUM_IV_LENGTH +
> > >> >> >> -			sizeof(union rte_event_crypto_metadata),
> > >> >> >> +			sizeof(struct rte_event_crypto_metadata),
> > >> >> >>  			rte_socket_id());
> > >> >> >>  	if (params.op_mpool == NULL) {
> > >> >> >>  		RTE_LOG(ERR, USER1, "Can't create
> > >CRYPTO_OP_POOL\n");
> > >> >> @@ -549,12
> > >> >> >> +549,12 @@ configure_cryptodev(void)
> > >> >> >>  	 * to include the session headers & private data
> > >> >> >>  	 */
> > >> >> >>  	session_size =
> > >> >> >> rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
> > >> >> >> -	session_size += sizeof(union rte_event_crypto_metadata);
> > >> >> >> +	session_size += sizeof(struct rte_event_crypto_metadata);
> > >> >> >>
> > >> >> >>  	params.session_mpool =
> > >rte_cryptodev_sym_session_pool_create(
> > >> >> >>  			"CRYPTO_ADAPTER_SESSION_MP",
> > >> >> >>  			MAX_NB_SESSIONS, 0, 0,
> > >> >> >> -			sizeof(union rte_event_crypto_metadata),
> > >> >> >> +			sizeof(struct rte_event_crypto_metadata),
> > >> >> >>  			SOCKET_ID_ANY);
> > >> >> >>  	TEST_ASSERT_NOT_NULL(params.session_mpool,
> > >> >> >>  			"session mempool allocation failed\n"); diff --
> > git
> > >> >> >> a/doc/guides/rel_notes/deprecation.rst
> > >> >> >> b/doc/guides/rel_notes/deprecation.rst
> > >> >> >> index 76a4abfd6b..58ee95c020 100644
> > >> >> >> --- a/doc/guides/rel_notes/deprecation.rst
> > >> >> >> +++ b/doc/guides/rel_notes/deprecation.rst
> > >> >> >> @@ -266,12 +266,6 @@ Deprecation Notices
> > >> >> >>    values to the function
> > >> >> >> ``rte_event_eth_rx_adapter_queue_add``
> > >> using
> > >> >> >>    the structure ``rte_event_eth_rx_adapter_queue_add``.
> > >> >> >>
> > >> >> >> -* eventdev: Reserved bytes of ``rte_event_crypto_request``
> > >> >> >> is a space holder
> > >> >> >> -  for ``response_info``. Both should be decoupled for better
> clarity.
> > >> >> >> -  New space for ``response_info`` can be made by changing
> > >> >> >> -  ``rte_event_crypto_metadata`` type to structure from union.
> > >> >> >> -  This change is targeted for DPDK 21.11.
> > >> >> >> -
> > >> >> >>  * metrics: The function ``rte_metrics_init`` will have a
> > >> >> >> non-void
> > return
> > >> >> >>    in order to notify errors instead of calling ``rte_exit``.
> > >> >> >>
> > >> >> >> diff --git a/doc/guides/rel_notes/release_21_11.rst
> > >> >> >> b/doc/guides/rel_notes/release_21_11.rst
> > >> >> >> index d707a554ef..ab76d5dd55 100644
> > >> >> >> --- a/doc/guides/rel_notes/release_21_11.rst
> > >> >> >> +++ b/doc/guides/rel_notes/release_21_11.rst
> > >> >> >> @@ -100,6 +100,8 @@ ABI Changes
> > >> >> >>     Also, make sure to start the actual text at the margin.
> > >> >> >>
> > >> >>
> > =======================================================
> > >> >> >>
> > >> >> >> +* eventdev: Modified type of ``union
> > >> >> >> +rte_event_crypto_metadata`` to struct and
> > >> >> >> +  removed reserved bytes from ``struct
> > rte_event_crypto_request``.
> > >> >> >>
> > >> >> >>  Known Issues
> > >> >> >>  ------------
> > >> >> >> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > >> >> >> b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > >> >> >> index eac6796cfb..c51be63146 100644
> > >> >> >> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > >> >> >> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > >> >> >> @@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows
> > *ws,
> > >> >> >> uintptr_t req,
> > >> >> >>  	ssovf_store_pair(add_work, req,
> > >> >> >> ws->grps[rsp_info->queue_id]); }
> > >> >> >>
> > >> >> >> -static inline union rte_event_crypto_metadata *
> > >> >> >> +static inline struct rte_event_crypto_metadata *
> > >> >> >>  get_event_crypto_mdata(struct rte_crypto_op *op)  {
> > >> >> >> -	union rte_event_crypto_metadata *ec_mdata;
> > >> >> >> +	struct rte_event_crypto_metadata *ec_mdata;
> > >> >> >>
> > >> >> >>  	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> > >> >> >>  		ec_mdata =
> > rte_cryptodev_sym_session_get_user_data(
> > >> >> >>  							   op->sym-
> > >>session);
> > >> >> >>  	else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> > >> >> >>  		 op->private_data_offset)
> > >> >> >> -		ec_mdata = (union rte_event_crypto_metadata *)
> > >> >> >> +		ec_mdata = (struct rte_event_crypto_metadata *)
> > >> >> >>  			((uint8_t *)op + op->private_data_offset);
> > >> >> >>  	else
> > >> >> >>  		return NULL;
> > >> >> >> @@ -731,7 +731,7 @@ get_event_crypto_mdata(struct
> > rte_crypto_op
> > >> >> *op)
> > >> >> >> uint16_t __rte_hot  otx_crypto_adapter_enqueue(void *port,
> > >> >> >> struct rte_crypto_op *op)  {
> > >> >> >> -	union rte_event_crypto_metadata *ec_mdata;
> > >> >> >> +	struct rte_event_crypto_metadata *ec_mdata;
> > >> >> >>  	struct cpt_instance *instance;
> > >> >> >>  	struct cpt_request_info *req;
> > >> >> >>  	struct rte_event *rsp_info; diff --git
> > >> >> >> a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > >> >> >> b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > >> >> >> index 42100154cd..952d1352f4 100644
> > >> >> >> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > >> >> >> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > >> >> >> @@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct
> > >> otx2_cpt_qp
> > >> >> *qp,
> > >> >> >>  		    struct rte_crypto_op *op,
> > >> >> >>  		    uint64_t cpt_inst_w7)
> > >> >> >>  {
> > >> >> >> -	union rte_event_crypto_metadata *m_data;
> > >> >> >> +	struct rte_event_crypto_metadata *m_data;
> > >> >> >>  	union cpt_inst_s inst;
> > >> >> >>  	uint64_t lmt_status;
> > >> >> >>
> > >> >> >> @@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct
> > >> otx2_cpt_qp
> > >> >> *qp,
> > >> >> >>  		}
> > >> >> >>  	} else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS
> > &&
> > >> >> >>  		   op->private_data_offset) {
> > >> >> >> -		m_data = (union rte_event_crypto_metadata *)
> > >> >> >> +		m_data = (struct rte_event_crypto_metadata *)
> > >> >> >>  			 ((uint8_t *)op +
> > >> >> >>  			  op->private_data_offset);
> > >> >> >>  	} else {
> > >> >> >> diff --git
> > >> >> >> a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> > >> >> >> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> > >> >> >> index ecf7eb9f56..458e8306d7 100644
> > >> >> >> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> > >> >> >> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
> > >> >> >> @@ -16,7 +16,7 @@
> > >> >> >>  static inline uint16_t
> > >> >> >>  otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)  {
> > >> >> >> -	union rte_event_crypto_metadata *m_data;
> > >> >> >> +	struct rte_event_crypto_metadata *m_data;
> > >> >> >>  	struct rte_crypto_op *crypto_op;
> > >> >> >>  	struct rte_cryptodev *cdev;
> > >> >> >>  	struct otx2_cpt_qp *qp;
> > >> >> >> @@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct
> > >> >> >> rte_event
> > >> >> >> *ev)
> > >> >> >>  		qp_id = m_data->request_info.queue_pair_id;
> > >> >> >>  	} else if (crypto_op->sess_type ==
> > RTE_CRYPTO_OP_SESSIONLESS
> > >> >> &&
> > >> >> >>  		   crypto_op->private_data_offset) {
> > >> >> >> -		m_data = (union rte_event_crypto_metadata *)
> > >> >> >> +		m_data = (struct rte_event_crypto_metadata *)
> > >> >> >>  			 ((uint8_t *)crypto_op +
> > >> >> >>  			  crypto_op->private_data_offset);
> > >> >> >>  		cdev_id = m_data->request_info.cdev_id; diff --git
> > >> >> >> a/lib/eventdev/rte_event_crypto_adapter.c
> > >> >> >> b/lib/eventdev/rte_event_crypto_adapter.c
> > >> >> >> index e1d38d383d..6977391ae9 100644
> > >> >> >> --- a/lib/eventdev/rte_event_crypto_adapter.c
> > >> >> >> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> > >> >> >> @@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct
> > >> >> >> rte_event_crypto_adapter *adapter,
> > >> >> >>  		 struct rte_event *ev, unsigned int cnt)  {
> > >> >> >>  	struct rte_event_crypto_adapter_stats *stats = &adapter-
> > >> >> >> >crypto_stats;
> > >> >> >> -	union rte_event_crypto_metadata *m_data = NULL;
> > >> >> >> +	struct rte_event_crypto_metadata *m_data = NULL;
> > >> >> >>  	struct crypto_queue_pair_info *qp_info = NULL;
> > >> >> >>  	struct rte_crypto_op *crypto_op;
> > >> >> >>  	unsigned int i, n;
> > >> >> >> @@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct
> > >> >> >> rte_event_crypto_adapter *adapter,
> > >> >> >>  			len++;
> > >> >> >>  		} else if (crypto_op->sess_type ==
> > >> >> RTE_CRYPTO_OP_SESSIONLESS &&
> > >> >> >>  				crypto_op->private_data_offset) {
> > >> >> >> -			m_data = (union
> > rte_event_crypto_metadata *)
> > >> >> >> +			m_data = (struct
> > rte_event_crypto_metadata *)
> > >> >> >>  				 ((uint8_t *)crypto_op +
> > >> >> >>  					crypto_op-
> > >private_data_offset);
> > >> >> >>  			cdev_id = m_data->request_info.cdev_id;
> > @@ -
> > >> >> >> 504,7 +504,7 @@ eca_ops_enqueue_burst(struct
> > >> >> rte_event_crypto_adapter
> > >> >> >> *adapter,
> > >> >> >>  		  struct rte_crypto_op **ops, uint16_t num)  {
> > >> >> >>  	struct rte_event_crypto_adapter_stats *stats = &adapter-
> > >> >> >> >crypto_stats;
> > >> >> >> -	union rte_event_crypto_metadata *m_data = NULL;
> > >> >> >> +	struct rte_event_crypto_metadata *m_data = NULL;
> > >> >> >>  	uint8_t event_dev_id = adapter->eventdev_id;
> > >> >> >>  	uint8_t event_port_id = adapter->event_port_id;
> > >> >> >>  	struct rte_event events[BATCH_SIZE]; @@ -523,7 +523,7 @@
> > >> >> >> eca_ops_enqueue_burst(struct rte_event_crypto_adapter
> > *adapter,
> > >> >> >>  					ops[i]->sym->session);
> > >> >> >>  		} else if (ops[i]->sess_type ==
> > >> RTE_CRYPTO_OP_SESSIONLESS &&
> > >> >> >>  				ops[i]->private_data_offset) {
> > >> >> >> -			m_data = (union
> > rte_event_crypto_metadata *)
> > >> >> >> +			m_data = (struct
> > rte_event_crypto_metadata *)
> > >> >> >>  				 ((uint8_t *)ops[i] +
> > >> >> >>  				  ops[i]->private_data_offset);
> > >> >> >>  		}
> > >> >> >> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> > >> >> >> b/lib/eventdev/rte_event_crypto_adapter.h
> > >> >> >> index f8c6cca87c..3c24d9d9df 100644
> > >> >> >> --- a/lib/eventdev/rte_event_crypto_adapter.h
> > >> >> >> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> > >> >> >> @@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode
> {
> > >> >> >>   * provide event request information to the adapter.
> > >> >> >>   */
> > >> >> >>  struct rte_event_crypto_request {
> > >> >> >> -	uint8_t resv[8];
> > >> >> >> -	/**< Overlaps with first 8 bytes of struct rte_event
> > >> >> >> -	 * that encode the response event information. Application
> > >> >> >> -	 * is expected to fill in struct rte_event response_info.
> > >> >> >> -	 */
> > >> >> >>  	uint16_t cdev_id;
> > >> >> >>  	/**< cryptodev ID to be used */
> > >> >> >>  	uint16_t queue_pair_id;
> > >> >> >> @@ -223,16 +218,16 @@ struct rte_event_crypto_request {
> > >> >> >>   * operation. If the transfer is done by SW, event response
> > >> information
> > >> >> >>   * will be used by the adapter.
> > >> >> >>   */
> > >> >> >> -union rte_event_crypto_metadata {
> > >> >> >> -	struct rte_event_crypto_request request_info;
> > >> >> >> -	/**< Request information to be filled in by application
> > >> >> >> -	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> > >> >> >> -	 */
> > >> >> >> +struct rte_event_crypto_metadata {
> > >> >> >>  	struct rte_event response_info;
> > >> >> >>  	/**< Response information to be filled in by application
> > >> >> >>  	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> > >> >> >>  	 * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> > >> >> >>  	 */
> > >> >> >> +	struct rte_event_crypto_request request_info;
> > >> >> >> +	/**< Request information to be filled in by application
> > >> >> >> +	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> > >> >> >> +	 */
> > >> >> >>  };
> > >> >> >>
> > >> >> >>  /**
> > >> >> >> --
> > >> >> >> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
  2021-09-19 14:25  0%                 ` Gujjar, Abhinandan S
@ 2021-09-19 18:49  3%                   ` Akhil Goyal
  2021-09-20  5:54  3%                     ` Gujjar, Abhinandan S
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-09-19 18:49 UTC (permalink / raw)
  To: Gujjar, Abhinandan S, Anoob Joseph, Shijith Thotton
  Cc: Jerin Jacob Kollanukkaran, Pavan Nikhilesh Bhagavatula,
	Ray Kinsella, Ankur Dwivedi, dev, thomas, nipun.gupta,
	Hemant Agrawal

Hi Abhinandan,
> > > >> >>
> > > >> >> >> In crypto adapter metadata, reserved bytes in request info
> > > >> >> >> structure is a space holder for response info. It enforces an
> > > >> >> >> order of operation if the structures are updated using memcpy
> > > >> >> >> to avoid overwriting response info. It is logical to move the
> > > >> >> >> reserved space out of request info. It also solves the
> > > >> >> >> ordering issue
> > > >> mentioned before.
> > > >> >> >I would like to understand what kind of ordering issue you have
> > > >> >> >faced with the current approach. Could you please give an
> > > >> >> >example/sequence
> > > >> >> and explain?
> > > >> >> >
> > > >> >>
> > > >> >> I have seen this issue with crypto adapter autotest (#n215).
> > > >> >>
> > > >> >> Example:
> > > >> >> rte_memcpy(&m_data.response_info, &response_info,
> > > >> >> sizeof(response_info)); rte_memcpy(&m_data.request_info,
> > > >> >> &request_info, sizeof(request_info));
> > > >> >>
> > > >> >> Here response info is getting overwritten by request info.
> > > >> >> Above lines can reordered to fix the issue, but can be ignored
> > > >> >> with this
> > > >> patch.
> > > >> >There is a reason for designing the metadata in this way.
> > > >> >Right now, sizeof (union rte_event_crypto_metadata) is 16 bytes.
> > > >> >So, the session based case needs just 16 bytes to store the data.
> > > >> >Whereas, for sessionless case each crypto_ops requires another 16
> > > bytes.
> > > >> >
> > > >> >By changing the struct in the following way you are doubling the
> > > >> >memory requirement.
> > > >> >With the changes, for sessionless case, each crypto op requires 32
> > > >> >bytes of space instead of 16 bytes and the mempool will be bigger.
> > > >> >This will have the perf impact too!
> > > >> >
> > > >> >You can just copy the individual members(cdev_id & queue_pair_id)
> > > >> >after the response_info.
> > > >> >OR You have a better way?
> > > >> >
> > > >>
> > > >> I missed the second word of event structure. It adds an extra 8
> > > >> bytes, which is not required.
> > > >I guess you meant not adding, it is overlapping with request
> information.
> > > >> Let me know, what you think of below change to metadata structure.
> > > >>
> > > >> struct rte_event_crypto_metadata {
> > > >> 	uint64_t response_info; // 8 bytes
> > > >With this, it lags clarity saying it is an event information.
> > > >> 	struct rte_event_crypto_request request_info; // 8 bytes };
> > > >>
> > > >> Total structure size is 16 bytes.
> > > >>
> > > >> Response info can be set like below in test app:
> > > >> 	m_data.response_info = response_info.event;
> > > >>
> > > >> The main aim of this patch is to decouple response info from request
> > info.
> > > >The decoupling was required as it was doing memcpy.
> > > >If you copy the individual members of request info(after response),
> > > >you don't require it.
> > >
> > > With this change, the application will be free of such constraints.
> >
> > [Anoob] Why don't we make it like,
> >
> > struct rte_event_crypto_metadata {
> >  	uint64_t ev_w0_template;
> >               /**< Template of the first word of ``struct rte_event``
> > (rte_event.event) to be set in the events generated by crypto adapter in
> > both RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> > 	 * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD modes.
> > 	*/
> >  	struct rte_event_crypto_request request_info; };
> >
> > This way, the usage would become more obvious and we will not have
> > additional space reserved as well.
> 
> Thanks for the suggestion. At this point, it is like an application without
> understanding the data structure/ internals
> has used memcpy and we are trying to fix it by changing the actual data
> structure instead of fixing the application!
> With this patch, the other applications(outside of dpdk) which are using the
> data structures in a right are forced to change!
> I don't think, this is the right way to handle this. I think, we should be
> updating the test case and documentation for this rather
> than changing the data structure.  I expect the next version of this patch
> should have those changes. Thanks!

The point here is about making the specification better and clearer to the user.
If the structure is unclear and error-prone when somebody does not follow the
proper documentation, we can modify it to reduce possible issues.

This is a cleanup that was announced in a deprecation notice in the last release.
This is an ABI-break release, and we should do this cleanup if it is legitimate.
Applications outside DPDK are notified as per the deprecation notice in the last release.
We have followed standard procedure to modify this structure,
hence, no need to worry about that.
We have provided, 2-3 suggestions to clean this up. Please suggest which
one is best for Intel use case.

Having first element as reserved in the structure doesn't look quite obvious.
This was also highlighted in the original patch, but was not taken up seriously
as we were not supporting it at that point.
See this
http://patches.dpdk.org/project/dpdk/patch/1524573807-168522-2-git-send-email-abhinandan.gujjar@intel.com/
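For reference, the overlap at the heart of this discussion can be sketched with simplified stand-in types (the layouts and names below are illustrative, not the exact rte_eventdev definitions):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins, not the exact DPDK definitions. */
struct ev  { uint64_t event; uint64_t u64; };  /* like struct rte_event */
struct req { uint8_t resv[8]; uint16_t cdev_id; uint16_t queue_pair_id; };

/* Old layout: request_info overlaps response_info via the union,
 * so resv[] aliases the first 8 bytes of the event word. */
union old_md  { struct req request_info; struct ev response_info; };
/* New layout: the two members are decoupled, no overlap. */
struct new_md { struct ev response_info; struct req request_info; };

int old_layout_keeps_response(void)
{
	union old_md m;
	struct ev  resp = { .event = 0x1234, .u64 = 0 };
	struct req rq   = { { 0 }, 1, 2 };

	memcpy(&m.response_info, &resp, sizeof(resp));
	memcpy(&m.request_info, &rq, sizeof(rq)); /* resv[] clobbers the event */
	return m.response_info.event == 0x1234;
}

int new_layout_keeps_response(void)
{
	struct new_md m;
	struct ev  resp = { .event = 0x1234, .u64 = 0 };
	struct req rq   = { { 0 }, 1, 2 };

	memcpy(&m.response_info, &resp, sizeof(resp));
	memcpy(&m.request_info, &rq, sizeof(rq)); /* no overlap now */
	return m.response_info.event == 0x1234;
}
```

With the union, the copy order matters (response_info must be written after request_info, or it is overwritten); with the struct, either order works, which is the ordering issue being debated.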

-Akhil

Please note:
Avoid top posting for comments and remove unnecessary lines while commenting back.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
  2021-09-19 18:49  3%                   ` Akhil Goyal
@ 2021-09-20  5:54  3%                     ` Gujjar, Abhinandan S
  0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-09-20  5:54 UTC (permalink / raw)
  To: Akhil Goyal, Anoob Joseph, Shijith Thotton
  Cc: Jerin Jacob Kollanukkaran, Pavan Nikhilesh Bhagavatula,
	Ray Kinsella, Ankur Dwivedi, dev, thomas, nipun.gupta,
	Hemant Agrawal

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, September 20, 2021 12:19 AM
> To: Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Anoob Joseph
> <anoobj@marvell.com>; Shijith Thotton <sthotton@marvell.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Pavan Nikhilesh
> Bhagavatula <pbhagavatula@marvell.com>; Ray Kinsella <mdr@ashroe.eu>;
> Ankur Dwivedi <adwivedi@marvell.com>; dev@dpdk.org;
> thomas@monjalon.net; nipun.gupta@nxp.com; Hemant Agrawal
> <hemant.agrawal@nxp.com>
> Subject: RE: [PATCH v3] eventdev: update crypto adapter metadata
> structures
> 
> Hi Abhinandan,
> > > > >> >>
> > > > >> >> >> In crypto adapter metadata, reserved bytes in request
> > > > >> >> >> info structure is a space holder for response info. It
> > > > >> >> >> enforces an order of operation if the structures are
> > > > >> >> >> updated using memcpy to avoid overwriting response info.
> > > > >> >> >> It is logical to move the reserved space out of request
> > > > >> >> >> info. It also solves the ordering issue
> > > > >> mentioned before.
> > > > >> >> >I would like to understand what kind of ordering issue you
> > > > >> >> >have faced with the current approach. Could you please give
> > > > >> >> >an example/sequence
> > > > >> >> and explain?
> > > > >> >> >
> > > > >> >>
> > > > >> >> I have seen this issue with crypto adapter autotest (#n215).
> > > > >> >>
> > > > >> >> Example:
> > > > >> >> rte_memcpy(&m_data.response_info, &response_info,
> > > > >> >> sizeof(response_info)); rte_memcpy(&m_data.request_info,
> > > > >> >> &request_info, sizeof(request_info));
> > > > >> >>
> > > > >> >> Here response info is getting overwritten by request info.
> > > > >> >> Above lines can reordered to fix the issue, but can be
> > > > >> >> ignored with this
> > > > >> patch.
> > > > >> >There is a reason for designing the metadata in this way.
> > > > >> >Right now, sizeof (union rte_event_crypto_metadata) is 16 bytes.
> > > > >> >So, the session based case needs just 16 bytes to store the data.
> > > > >> >Whereas, for sessionless case each crypto_ops requires another
> > > > >> >16
> > > > bytes.
> > > > >> >
> > > > >> >By changing the struct in the following way you are doubling
> > > > >> >the memory requirement.
> > > > >> >With the changes, for sessionless case, each crypto op
> > > > >> >requires 32 bytes of space instead of 16 bytes and the mempool
> will be bigger.
> > > > >> >This will have the perf impact too!
> > > > >> >
> > > > >> >You can just copy the individual members(cdev_id &
> > > > >> >queue_pair_id) after the response_info.
> > > > >> >OR You have a better way?
> > > > >> >
> > > > >>
> > > > >> I missed the second word of event structure. It adds an extra 8
> > > > >> bytes, which is not required.
> > > > >I guess you meant not adding, it is overlapping with request
> > information.
> > > > >> Let me know, what you think of below change to metadata
> structure.
> > > > >>
> > > > >> struct rte_event_crypto_metadata {
> > > > >> 	uint64_t response_info; // 8 bytes
> > > > >With this, it lags clarity saying it is an event information.
> > > > >> 	struct rte_event_crypto_request request_info; // 8 bytes };
> > > > >>
> > > > >> Total structure size is 16 bytes.
> > > > >>
> > > > >> Response info can be set like below in test app:
> > > > >> 	m_data.response_info = response_info.event;
> > > > >>
> > > > >> The main aim of this patch is to decouple response info from
> > > > >> request
> > > info.
> > > > >The decoupling was required as it was doing memcpy.
> > > > >If you copy the individual members of request info(after
> > > > >response), you don't require it.
> > > >
> > > > With this change, the application will be free of such constraints.
> > >
> > > [Anoob] Why don't we make it like,
> > >
> > > struct rte_event_crypto_metadata {
> > >  	uint64_t ev_w0_template;
> > >               /**< Template of the first word of ``struct
> > > rte_event``
> > > (rte_event.event) to be set in the events generated by crypto
> > > adapter in both RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
> > > 	 * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD modes.
> > > 	*/
> > >  	struct rte_event_crypto_request request_info; };
> > >
> > > This way, the usage would become more obvious and we will not have
> > > additional space reserved as well.
> >
> > Thanks for the suggestion. At this point, it is like an application
> > without understanding the data structure/ internals has used memcpy
> > and we are trying to fix it by changing the actual data structure
> > instead of fixing the application!
> > With this patch, the other applications(outside of dpdk) which are
> > using the data structures in a right are forced to change!
> > I don't think, this is the right way to handle this. I think, we
> > should be updating the test case and documentation for this rather
> > than changing the data structure.  I expect the next version of this
> > patch should have those changes. Thanks!
> 
> The point here is about making the specification better and clearer to the
> user.
> If the structure is not clear and is error prone if somebody does not follow
> Proper documentation, we can modify it to reduce possible issues.
I think the structure is clear. Apart from adding documentation,
I feel that forming a structure with appropriate data types and naming it to be self-explanatory is important.
That is what we have done, and this is already discussed in the original patch as well.
It is important for any application writer to go through the documentation and the structure to understand them before use.
The current documentation clearly says it has an overlap. If you still feel it lacks clarity in some places, please go ahead and add documentation.
It is not about the ABI breakage; it is about changing a proper structure for an ignorant application!
I am not convinced with the proposed changes. If you have a better one, please let me know.

> 
> This is a cleanup which was added in deprecation notice as well in the last
> release.
> This is a ABI break release and we should do this cleanup if it is legit.
> Applications outside DPDK are notified as per the deprecation notice in the
> last release.
> We have followed standard procedure to modify this structure, hence, no
> need to worry about that.
> We have provided, 2-3 suggestions to clean this up. Please suggest which
> one is best for Intel use case.
> 
> Having first element as reserved in the structure doesn't look quite obvious.
> This was also highlighted in the original patch, but was not taken up seriously
> as we were not supporting it at that point.
> See this
> http://patches.dpdk.org/project/dpdk/patch/1524573807-168522-2-git-
> send-email-abhinandan.gujjar@intel.com/

> 
> -Akhil
> 
> Please note:
> Avoid top posting for comments and remove unnecessary lines while
> commenting back.
> 
Please note that I am using Outlook with appropriate settings to respond.
I am not sure why this is coming across as top posting.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] cmdline: reduce ABI
  @ 2021-09-20 11:11  4%   ` David Marchand
  2021-09-20 11:21  4%     ` Olivier Matz
  2021-09-28 19:47  4%   ` [dpdk-dev] [PATCH v2 0/2] " Dmitry Kozlyuk
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-09-20 11:11 UTC (permalink / raw)
  To: Dmitry Kozlyuk, Ray Kinsella, Olivier Matz; +Cc: dev

On Sat, Sep 11, 2021 at 1:17 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
>
> Remove the definition of `struct cmdline` from public header.
> Deprecation notice:
> https://mails.dpdk.org/archives/dev/2020-September/183310.html
>
> Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>

This patch lgtm.
Acked-by: David Marchand <david.marchand@redhat.com>


> ---
> I would also hide struct rdline to be able to alter buffer size,
> but we don't have a deprecation notice for it.

Fyi, I found one project looking into an rdline pointer to get the back
reference to the cmdline stored in its opaque field.
https://github.com/Gandi/packet-journey/blob/master/app/cmdline.c#L1398

This cmdline pointer is then dereferenced to get s_out.
Given that we announced cmdline becoming opaque, they would have to
handle the first API change in any case.
I don't think another API change would really make a big difference to them.

Plus, this project seems stuck on 18.08 support.



--
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] cmdline: reduce ABI
  2021-09-20 11:11  4%   ` David Marchand
@ 2021-09-20 11:21  4%     ` Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2021-09-20 11:21 UTC (permalink / raw)
  To: David Marchand; +Cc: Dmitry Kozlyuk, Ray Kinsella, dev

Hi Dmitry,

On Mon, Sep 20, 2021 at 01:11:23PM +0200, David Marchand wrote:
> On Sat, Sep 11, 2021 at 1:17 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
> >
> > Remove the definition of `struct cmdline` from public header.
> > Deprecation notice:
> > https://mails.dpdk.org/archives/dev/2020-September/183310.html
> >
> > Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> 
> This patch lgtm.
> Acked-by: David Marchand <david.marchand@redhat.com>

Acked-by: Olivier Matz <olivier.matz@6wind.com>

Many thanks Dmitry for taking care of this.


> > ---
> > I would also hide struct rdline to be able to alter buffer size,
> > but we don't have a deprecation notice for it.
> 
> Fyi, I found one project looking into a rdline pointer to get the back
> reference to cmdline stored in opaque.
> https://github.com/Gandi/packet-journey/blob/master/app/cmdline.c#L1398
> 
> This cmdline pointer is then dereferenced to get s_out.
> Given that we announced cmdline becoming opaque, they would have to
> handle the first API change in any case.
> I don't think another API change would really make a big difference to them.
> 
> Plus, this project seems stuck to 18.08 support.

I agree with you and David: it would make sense to also hide the rdline
struct at the same time.


Olivier
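
The pattern applied here - keeping only a forward declaration in the public header so the layout stays library-internal - can be sketched generically (the names below are hypothetical, not the cmdline API):

```c
#include <stdlib.h>

/* Public-facing part (sketch): applications see only a forward
 * declaration plus accessor functions; hypothetical names. */
struct handle;                     /* opaque, like struct cmdline */

struct handle *handle_new(int value);
int handle_value(const struct handle *h);
void handle_free(struct handle *h);

/* Library-internal part: the layout can now change between releases
 * without breaking the ABI of applications built against the header. */
struct handle {
	int value;
};

struct handle *handle_new(int value)
{
	struct handle *h = malloc(sizeof(*h));

	if (h != NULL)
		h->value = value;
	return h;
}

int handle_value(const struct handle *h)
{
	return h->value;
}

void handle_free(struct handle *h)
{
	free(h);
}
```

Applications that previously dereferenced the struct (like the packet-journey example above) must switch to accessors, which is exactly the one-time API adjustment being discussed.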

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
  @ 2021-09-20 20:11  0%   ` Narcisa Ana Maria Vasile
  2021-09-30 22:16  0%     ` William Tu
  2021-10-01 10:34  0%     ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-09-20 20:11 UTC (permalink / raw)
  To: William Tu
  Cc: dev, Dmitry.Kozliuk, Nick Connolly, Omar Cardona, Pallavi Kadam

On Tue, Aug 24, 2021 at 04:21:03PM +0000, William Tu wrote:
> Currently there are some public headers that include 'sys/queue.h', which
> is not POSIX, but usually provided by the Linux/BSD system library.
> (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> The file is missing on Windows. During the Windows build, DPDK uses a
> bundled copy, so building the DPDK library works fine. But when OVS or
> other applications use DPDK as a library on Windows, the build fails
> because some DPDK public headers include 'sys/queue.h' and no such file
> exists.
> 
> One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
> Windows environment, such as [1]. However, this means DPDK exports the
> functionalities of 'sys/queue.h' into the environment, which might cause
> symbols, macros, headers clashing with other applications.
> 
> The patch fixes it by removing the "#include <sys/queue.h>" from
> DPDK public headers, so programs including DPDK headers don't depend
> on the system to provide 'sys/queue.h'. When these public headers use
> macros such as TAILQ_xxx, we replace it by the ones with RTE_ prefix.
> For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> in Windows EAL. Note that these RTE_ macros are compatible with
> <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
> macros in C files) and ABI (to avoid breaking it).
> 
> Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> 
> [1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
> 
> Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> Signed-off-by: William Tu <u9012063@gmail.com>
> ---
Acked-by: Narcisa Vasile <navasile@linux.microsoft.com>
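
The macro substitution described in the commit message can be illustrated with a minimal, self-contained sketch; the RTE_TAILQ_FOREACH_SAFE definition below mirrors the usual _SAFE idiom and is a sketch, not the exact rte_os.h code:

```c
#include <stdlib.h>
#include <sys/queue.h>

/* Illustrative RTE_-prefixed wrapper, layout-compatible with the
 * <sys/queue.h> macros as the patch requires (sketch, not the real
 * Windows EAL definition). */
#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
	for ((var) = TAILQ_FIRST(head);				\
	     (var) && ((tvar) = TAILQ_NEXT(var, field), 1);	\
	     (var) = (tvar))

struct node {
	int v;
	TAILQ_ENTRY(node) next;
};
TAILQ_HEAD(node_list, node);

/* Insert n nodes, then remove them during iteration - safe only with
 * the _SAFE variant, since each node is freed while walking. */
int fill_and_drain(int n)
{
	struct node_list head = TAILQ_HEAD_INITIALIZER(head);
	struct node *e, *tmp;
	int i, cnt = 0;

	for (i = 0; i < n; i++) {
		e = malloc(sizeof(*e));
		if (e == NULL)
			return -1;
		e->v = i;
		TAILQ_INSERT_TAIL(&head, e, next);
	}
	RTE_TAILQ_FOREACH_SAFE(e, &head, next, tmp) {
		TAILQ_REMOVE(&head, e, next);
		free(e);
		cnt++;
	}
	return cnt;
}
```

Because the RTE_ macro expands to the same list layout, mixing it with plain TAILQ_ macros in the same C file stays both API- and ABI-compatible, which is the compatibility property the commit message claims.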

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [RFC PATCH v7 0/5] Add PIE support for HQoS library
  @ 2021-09-22  7:46  3% ` Liguzinski, WojciechX
  2021-09-23  9:45  3%   ` [dpdk-dev] [RFC PATCH v8 " Liguzinski, WojciechX
  0 siblings, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-09-22  7:46 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and
latency variation. Currently, it supports RED for active queue management, which is
designed to control the queue length but does not control latency directly and is now
being obsoleted. However, more advanced queue management is required to address this
problem and provide the desired quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures and adding a new set of them to the library, along with PIE-related APIs.
This affects structures in the public API/ABI. That is why a deprecation notice is going
to be prepared and sent.
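
For background, the core of PIE is a periodic drop-probability update driven by queuing latency rather than queue length; a minimal sketch per RFC 8033, section 4.2 (the constants in the test values are illustrative, not the defaults of this patch set):

```c
/* Minimal sketch of the PIE drop-probability update (RFC 8033, sec. 4.2).
 * qdelay/qdelay_old are the current and previous queuing delays,
 * qdelay_ref the latency target, alpha/beta the controller gains. */
double pie_update_prob(double p, double qdelay, double qdelay_old,
		       double qdelay_ref, double alpha, double beta)
{
	/* Latency, not queue length, drives the controller. */
	p += alpha * (qdelay - qdelay_ref) + beta * (qdelay - qdelay_old);

	/* Clamp to a valid probability. */
	if (p < 0.0)
		p = 0.0;
	else if (p > 1.0)
		p = 1.0;
	return p;
}
```

Each enqueued packet is then dropped with probability p, so latency above the target, or rising latency, raises the drop rate, while latency below the target lets it decay back to zero.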

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/autotest_data.py                    |   18 +
 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |    6 +-
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |   82 +-
 examples/qos_sched/init.c                    |    7 +-
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  228 ++--
 lib/sched/rte_sched.h                        |   53 +-
 lib/sched/version.map                        |    3 +
 19 files changed, 2050 insertions(+), 190 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC v2 0/5] hide eth dev related structures
    @ 2021-09-22 14:09  4% ` Konstantin Ananyev
  2021-09-22 14:09  1%   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                     ` (3 more replies)
  1 sibling, 4 replies; 200+ results
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow possible future changes to core ethdev-related structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of
   this flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes inside PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user app is required) and PMD developers
   (no changes in PMD is required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause some slowdown for
   code paths where RX/TX callbacks are heavily involved.
   Hopefully such a tradeoff is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
 
Performance testing results (ICX 2.0GHz):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      performance numbers remains the same before and after the patch
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown

Would like to thank Ferruh and Jerin for reviewing and testing the
previous version of this RFC.
All interested parties please provide your feedback for v2.
If there are no major objections, I plan to submit a proper v3
patch in the next few days.

Konstantin Ananyev (5):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'burst' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |   6 +-
 drivers/net/e1000/em_rxtx.c                   |   4 +-
 drivers/net/e1000/igb_rxtx.c                  |   4 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   2 +-
 drivers/net/fm10k/fm10k_rxtx.c                |   4 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  |   4 +-
 drivers/net/i40e/i40e_rxtx.h                  |   3 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_txrx.c                    |   5 +-
 drivers/net/igc/igc_txrx.h                    |   3 +-
 drivers/net/ixgbe/ixgbe_ethdev.h              |   3 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |   4 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.h           |   2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |   8 +-
 drivers/net/sfc/sfc_ethdev.c                  |  12 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 lib/ethdev/ethdev_driver.h                    | 152 +++++++++
 lib/ethdev/ethdev_private.c                   |  84 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  78 +++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 168 +++-------
 lib/ethdev/version.map                        |   6 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 55 files changed, 643 insertions(+), 380 deletions(-)

-- 
2.26.3



* [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
  2021-09-22 14:09  4% ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
@ 2021-09-22 14:09  1%   ` Konstantin Ananyev
  2021-09-23  5:51  0%     ` Wang, Haiyue
  2021-09-22 14:09  2%   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it would be
desirable to unify the parameter lists of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
This is an API and ABI breakage.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 42 files changed, 97 insertions(+), 110 deletions(-)

diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6de8ad3fe3..a08c2c6cf4 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2793,14 +2793,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index bef24173cf..73b89fb2f0 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5028,7 +5028,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..00f27c643a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-22 14:09  4% ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  2021-09-22 14:09  1%   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-09-22 14:09  2%   ` Konstantin Ananyev
  2021-09-23  5:58  0%     ` Wang, Haiyue
  2021-09-22 14:09  2%   ` [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array Konstantin Ananyev
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
  3 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from rte_eth_dev structure into a separate flat
array. The array can be kept public so that inline functions for 'fast' calls
(like rte_eth_rx_burst(), etc.) can still use it and avoid/minimize slowdown.
The intention is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
 4 files changed, 122 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..a1683da77b 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_burst_api dummy_api = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*rba = dummy_api;
+}
+
+void
+eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
+		const struct rte_eth_dev *dev)
+{
+	rba->rx_pkt_burst = dev->rx_pkt_burst;
+	rba->tx_pkt_burst = dev->tx_pkt_burst;
+	rba->tx_pkt_prepare = dev->tx_pkt_prepare;
+	rba->rx_queue_count = dev->rx_queue_count;
+	rba->rx_descriptor_status = dev->rx_descriptor_status;
+	rba->tx_descriptor_status = dev->tx_descriptor_status;
+
+	rba->rxq.data = dev->data->rx_queues;
+	rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	rba->txq.data = dev->data->tx_queues;
+	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
+
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 9bb0879538..54921f4860 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'burst' API to dummy values */
+void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
+
+/* setup eth 'burst' API to ethdev values */
+void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
+		const struct rte_eth_dev *dev);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..5904bb7bae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast/burst' API */
+struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_burst_api)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
+		eth_dev_burst_api_reset(rte_eth_burst_api + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 00f27c643a..da6de5de43 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev RX/TX
+ * queue data.
+ * The main reason to expose these pointers at all is to let the
+ * compiler fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_burst_api {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array
  2021-09-22 14:09  4% ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
  2021-09-22 14:09  1%   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
  2021-09-22 14:09  2%   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-09-22 14:09  2%   ` Konstantin Ananyev
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
  3 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-09-22 14:09 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework 'fast' burst functions to use rte_eth_burst_api[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user app is required) and
PMD developers (no changes in PMD is required).
One extra thing to note - RX/TX callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 244 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 212 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index a1683da77b..b46e077139 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -227,3 +227,34 @@ eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
 	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
 
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
+
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 73b89fb2f0..58ee983b03 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4872,6 +4872,34 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+__rte_experimental
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4963,23 +4991,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_burst_api *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -4987,16 +5029,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5019,16 +5055,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_burst_api *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5097,21 +5144,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 #define RTE_ETH_TX_DESC_FULL    0 /**< Desc filled for hw, waiting xmit. */
@@ -5154,23 +5210,55 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entering PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+__rte_experimental
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5241,20 +5329,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_burst_api *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5262,21 +5364,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5339,31 +5436,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_burst_api *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_burst_api[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..65444f9a99 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -247,11 +247,16 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.05
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
 };
 
 INTERNAL {
 	global:
 
+	rte_eth_burst_api;
 	rte_eth_dev_allocate;
 	rte_eth_dev_allocated;
 	rte_eth_dev_attach_secondary;
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  @ 2021-09-22 15:08  0%       ` Ananyev, Konstantin
  2021-09-27 16:14  0%         ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-09-22 15:08 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard


> > Hi Jerin,
> >
> > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > Due to significant amount of work, changes required are applied only to two
> > > > PMDs so far: net/i40e and net/ice.
> > > > So to build it you'll need to add:
> > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > to your config options.
> > >
> > > >
> > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > >
> > > > So far I done only limited amount functional and performance testing.
> > > > Didn't spot any functional problems, and performance numbers
> > > > remains the same before and after the patch on my box (testpmd, macswap fwd).
> > >
> > >
> > > Based on testing on octeontx2, we see some regression in testpmd and
> > > a bit on l3fwd too.
> > >
> > > Without patch: 73.5mpps/core in testpmd iofwd
> > > With patch: 72.5mpps/core in testpmd iofwd
> > >
> > > Based on my understanding it is due to additional indirection.
> >
> > From your patch below, it looks like it is not actually additional indirection,
> > but an extra memory dereference - the func and dev pointers are now stored
> > in different places.
> 
> Yup. I meant the same. We are on the same page.
> 
> > Plus the fact that now we dereference rte_eth_devices[]
> > data inside the PMD function, which probably prevents the compiler and CPU from loading
> >  rte_eth_devices[port_id].data and rte_eth_devices[port_id].pre_tx_burst_cbs[queue_id]
> > in advance before calling the actual RX/TX function.
> 
> Yes.
> 
> > About your approach: I don't mind adding an extra opaque 'void *data' pointer,
> > but would prefer not to expose the callback invocation code in an inline function.
> > The main reason for that - I think it still needs to be reworked to allow adding/removing
> > callbacks without stopping the device. Something similar to what was done for cryptodev
> > callbacks. To be able to do that in future without another ABI breakage, the
> > callbacks-related part needs to be kept internal.
> > Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> > One for rx/tx queue data pointers, second for rx/tx callback pointers.
> > To be more specific, something like:
> >
> > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > ....
> >
> > struct rte_eth_burst_api {
> >         rte_eth_rx_burst_t rx_pkt_burst;
> >         /**< PMD receive function. */
> >         rte_eth_tx_burst_t tx_pkt_burst;
> >         /**< PMD transmit function. */
> >         rte_eth_tx_prep_t tx_pkt_prepare;
> >         /**< PMD transmit prepare function. */
> >         rte_eth_rx_queue_count_t rx_queue_count;
> >         /**< Get the number of used RX descriptors. */
> >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> >         /**< Check the status of a Rx descriptor. */
> >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> >         /**< Check the status of a Tx descriptor. */
> >         struct {
> >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> >        } rx_data, tx_data;
> > } __rte_cache_aligned;
> >
> > static inline uint16_t
> > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > {
> >        struct rte_eth_burst_api *p;
> >
> >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> >                 return 0;
> >
> >       p =  &rte_eth_burst_api[port_id];
> >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> 
> 
> 
> That works.
> 
> 
> > }
> >
> > Same for TX.
> >
> > If that looks ok to everyone, I'll try to prepare next version based on that.
> 
> 
> Looks good to me.
> 
> > In theory that should avoid the extra dereference problem and even reduce indirection.
> > As a drawback, data->rxq/txq would always have to be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > but I presume that's not a big deal.
> >
> > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > while rte_ethdev_trace_tx_burst() is invoked after CBs but before the actual tx_pkt_burst()?
> > It would make things simpler if tracing were always done either on entrance to or exit from rx/tx_burst.
> 
> exit is fine.
> 
> >
> > >
> > > My suggestion to fix the problem by:
> > > Removing the additional `data` redirection and pulling the callback function
> > > pointers back,
> > > and keeping the rest as opaque, as done in the existing patch [1]
> > >
> > > I don't believe this has any real implication for future ABI stability,
> > > as we will not be adding
> > > any new items to rte_eth_fp; new features can be added in the slowpath
> > > rte_eth_dev, as mentioned in the patch.
> 
> Ack
> 
> I will happy to test again after the rework and report any performance
> issues if any.

Thanks Jerin, v2 is out:
https://patches.dpdk.org/project/dpdk/list/?series=19084
Please have a look, when you'll get a chance.


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] Minutes of Technical Board Meeting, 2021-09-08
@ 2021-09-22 22:17  4% Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-09-22 22:17 UTC (permalink / raw)
  To: dev; +Cc: nd, nd

Minutes of Technical Board Meeting, 2021-09-08

Members Attending: 10/12
- Aaron Conole
- Bruce Richardson
- Ferruh Yigit
- Hemant Agrawal
- Honnappa Nagarahalli (Chair)
- Kevin Traynor
- Maxime Coquelin
- Olivier Matz
- Stephen Hemminger
- Thomas Monjalon

NOTE: The Technical Board meetings take place every second Wednesday on  https://meet.jit.si/DPDK  at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
Agenda and minutes can be found at https://annuel.framapad.org/p/r.0c3cc4d1e011214183872a98f6b5c7db
NOTE: Next meeting will be on Wednesday 2021-09-22 @3pm UTC, and will be chaired by Kevin


1) Any new ABI breakages?
	- Akhil will send out an ABI breakage approval request to the Techboard for CryptoDev.
	- Honnappa will send out an ABI breakage approval request for the rte_eal_wait_lcore API.
	- Please CC Ray Kinsella on these emails

2) ORAN AAL Questions/Concerns? Can we refer it to Governing board to get a legal opinion?
	- Aaron clarified that Redhat has to strip the DPDK modules that use SCCL before packaging
	- Techboard recommends that the ORAN AAL work should result in a specification that allows for DPDK to follow its own coding guidelines (ex: naming conventions), principles (ex: object types, meta data) and policies (ex: ABI policy).
	- Techboard will seek legal opinion on "If DPDK implements APIs from SCCL licensed specification document (assumed to be similar in technical content to VirtIO spec), what kind of licensing encumbrances are attached to the API?"
	- Techboard will request the governing board to establish a channel with ORAN community to proactively discuss the SCCL licensing issues

3) User space event
	- The current recommendation is to host the user space event in Europe just before or just after FOSDEM. This allows participants to cover both events with a single trip.
	- A hybrid event is recommended.

4) Techboard membership policy
	- This work is split into two buckets: a) documenting the current policy for accepting new members, and b) a policy to evaluate continued Techboard membership.
	- #a is almost ready. Hemant and Thomas will make final touches.
	- A deadline to complete #b will be discussed in the next Techboard meeting.

5) Slack channel update
	- Slack channel is created and tested
	- Slack channel's existence will be announced in this week's release status meeting and the invitation link will be provided.

6) Update on DTS usability WG
	- The WG has completed triaging the input
	- Next step is to create a set of slides to provide context and an excel sheet to document the changes required. Honnappa has the AI to bring this to Techboard.


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
  2021-09-22 14:09  1%   ` [dpdk-dev] [RFC v2 2/5] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-09-23  5:51  0%     ` Wang, Haiyue
  0 siblings, 0 replies; 200+ results
From: Wang, Haiyue @ 2021-09-23  5:51 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 22, 2021 22:10
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [RFC v2 2/5] ethdev: change input parameters for rx_queue_count
> 
> Currently majority of 'fast' ethdev ops take pointers to internal
> queue data structures as an input parameter.
> While eth_rx_queue_count() takes a pointer to rte_eth_dev and queue
> index.
> For future work to hide rte_eth_devices[] and friends it would be
> plausible to unify parameters list of all 'fast' ethdev ops.
> This patch changes eth_rx_queue_count() to accept pointer to internal
> queue data as input parameter.
> This is an API and ABI breakage.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
>  drivers/net/ark/ark_ethdev_rx.h         |  3 +--
>  drivers/net/atlantic/atl_ethdev.h       |  2 +-
>  drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
>  drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
>  drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
>  drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
>  drivers/net/e1000/e1000_ethdev.h        |  6 ++----
>  drivers/net/e1000/em_rxtx.c             |  4 ++--
>  drivers/net/e1000/igb_rxtx.c            |  4 ++--
>  drivers/net/enic/enic_ethdev.c          | 12 ++++++------
>  drivers/net/fm10k/fm10k.h               |  2 +-
>  drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
>  drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
>  drivers/net/hns3/hns3_rxtx.h            |  2 +-
>  drivers/net/i40e/i40e_rxtx.c            |  4 ++--
>  drivers/net/i40e/i40e_rxtx.h            |  3 +--
>  drivers/net/iavf/iavf_rxtx.c            |  4 ++--
>  drivers/net/iavf/iavf_rxtx.h            |  2 +-
>  drivers/net/ice/ice_rxtx.c              |  4 ++--
>  drivers/net/ice/ice_rxtx.h              |  2 +-
>  drivers/net/igc/igc_txrx.c              |  5 ++---
>  drivers/net/igc/igc_txrx.h              |  3 +--
>  drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
>  drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
>  drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
>  drivers/net/mlx5/mlx5_rx.h              |  2 +-
>  drivers/net/netvsc/hn_rxtx.c            |  4 ++--
>  drivers/net/netvsc/hn_var.h             |  2 +-
>  drivers/net/nfp/nfp_rxtx.c              |  4 ++--
>  drivers/net/nfp/nfp_rxtx.h              |  3 +--
>  drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
>  drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
>  drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
>  drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
>  drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
>  drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
>  drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
>  drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
>  drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
>  lib/ethdev/rte_ethdev.h                 |  2 +-
>  lib/ethdev/rte_ethdev_core.h            |  3 +--
>  42 files changed, 97 insertions(+), 110 deletions(-)
> 
> diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
> index d255f0177b..98658ce621 100644
> --- a/drivers/net/ark/ark_ethdev_rx.c
> +++ b/drivers/net/ark/ark_ethdev_rx.c
> @@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
>  }
> 
>  uint32_t
> -eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
> +eth_ark_dev_rx_queue_count(void *rx_queue)
>  {
>  	struct ark_rx_queue *queue;

Just change it to be one line : " struct ark_rx_queue *queue = rx_queue;" ?

> 
> -	queue = dev->data->rx_queues[queue_id];
> +	queue = rx_queue;
>  	return (queue->prod_index - queue->cons_index);	/* mod arith */
>  }


> --
> 2.26.3


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-22 14:09  2%   ` [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-09-23  5:58  0%     ` Wang, Haiyue
  2021-09-27 18:01  0%       ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Wang, Haiyue @ 2021-09-23  5:58 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 22, 2021 22:10
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
> 
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a separate flat
> array. We can keep it public to still use inline functions for 'fast' calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The intention is to make rte_eth_dev and related structures internal.
> That should allow possible future changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
>  4 files changed, 122 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..a1683da77b 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_burst_api dummy_api = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*rba = dummy_api;
> +}
> +
> +void
> +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> +		const struct rte_eth_dev *dev)
> +{
> +	rba->rx_pkt_burst = dev->rx_pkt_burst;
> +	rba->tx_pkt_burst = dev->tx_pkt_burst;
> +	rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	rba->rx_queue_count = dev->rx_queue_count;
> +	rba->rx_descriptor_status = dev->rx_descriptor_status;
> +	rba->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	rba->rxq.data = dev->data->rx_queues;
> +	rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	rba->txq.data = dev->data->tx_queues;
> +	rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> +
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 9bb0879538..54921f4860 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> 
> +/* reset eth 'burst' API to dummy values */
> +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> +
> +/* setup eth 'burst' API to ethdev values */
> +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> +		const struct rte_eth_dev *dev);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 424bc260fa..5904bb7bae 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> 
> +/* public 'fast/burst' API */
> +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> 
> @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
>  		(*dev->dev_ops->link_update)(dev, 0);
>  	}
> 
> +	/* expose selection of PMD rx/tx function */
> +	eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> +
>  	rte_ethdev_trace_start(port_id);
>  	return 0;
>  }
> @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
>  		return 0;
>  	}
> 
> +	/* point rx/tx functions to dummy ones */
> +	eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> +
>  	dev->data->dev_started = 0;
>  	ret = (*dev->dev_ops->dev_stop)(dev);
>  	rte_ethdev_trace_stop(port_id, ret);
> @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>  }
> 
> +RTE_INIT(eth_dev_init_burst_api)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> +		eth_dev_burst_api_reset(rte_eth_burst_api + i);
> +}
> +
>  RTE_INIT(eth_dev_init_cb_lists)
>  {
>  	uint16_t i;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 00f27c643a..da6de5de43 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
> 
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev RX/TX
> + * queue data.
> + * The main purpose of exposing these pointers at all is to allow the
> + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * 'fast' ethdev functions and related data are held in a flat array,
> + * one entry per ethdev.
> + */
> +struct rte_eth_burst_api {

'ops' is better ? Like "struct rte_eth_burst_ops". ;-)

> +
> +	/** first 64B line */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst;
> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	uintptr_t reserved[2];
> +

What about a 32-bit system? Does it need something like:

__rte_cache_aligned for rxq to make sure of the 64B line?

> +	/** second 64B line */
> +	struct rte_ethdev_qdata rxq;
> +	struct rte_ethdev_qdata txq;
> +	uintptr_t reserved2[4];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> +
> 
>  /**
>   * @internal
> --
> 2.26.3


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal
    @ 2021-09-23  8:20  0%   ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-09-23  8:20 UTC (permalink / raw)
  To: Harman Kalra
  Cc: dev, Yigit, Ferruh, Ajit Khaparde, Qi Zhang,
	Jerin Jacob Kollanukkaran, Raslan Darawsheh, Maxime Coquelin,
	Xia, Chenbo

Hello Harman,


On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra <hkalra@marvell.com> wrote:
>
> Move struct rte_intr_handle to being an internal structure to
> avoid any ABI breakages in the future, since this structure defines
> some static arrays and changing the respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size at probe time. Either way it's an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>

Thanks for taking care of this huge cleanup.
I started looking at it.


CC: Ferruh and next-net* maintainers for awareness.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [RFC PATCH v8 0/5] Add PIE support for HQoS library
  2021-09-22  7:46  3% ` [dpdk-dev] [RFC PATCH v7 " Liguzinski, WojciechX
@ 2021-09-23  9:45  3%   ` Liguzinski, WojciechX
  2021-10-11  7:55  3%     ` [dpdk-dev] [PATCH v9 " Liguzinski, WojciechX
  0 siblings, 1 reply; 200+ results
From: Liguzinski, WojciechX @ 2021-09-23  9:45 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the
bufferbloat problem, a situation in which excess buffers in the network cause
high latency and latency variation. Currently, it supports RED for active
queue management, which is designed to control the queue length but does not
control latency directly and is now being obsoleted. However, more advanced
queue management is required to address this problem and provide desirable
quality of service to users.

This solution (RFC) proposes the use of a new algorithm called "PIE"
(Proportional Integral controller Enhanced) that can effectively and directly
control queuing latency to address the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing
data structures, adding a new set of data structures to the library, and
adding PIE-related APIs. This affects structures in the public API/ABI, which
is why a deprecation notice is going to be prepared and sent.
Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/autotest_data.py                    |   18 +
 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |    6 +-
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |   82 +-
 examples/qos_sched/init.c                    |    7 +-
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  228 ++--
 lib/sched/rte_sched.h                        |   53 +-
 lib/sched/version.map                        |    3 +
 19 files changed, 2050 insertions(+), 190 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2] flow_classify: remove experimental tag from the API
  @ 2021-09-23 10:09  3%   ` Kevin Traynor
  0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-09-23 10:09 UTC (permalink / raw)
  To: Bernard Iremonger, ray.kinsella, dev

Hi Bernard,

On 22/09/2021 17:48, Bernard Iremonger wrote:
> This API was introduced in 17.11; remove the experimental tag
> to promote it to stable state.
> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---
>   examples/flow_classify/meson.build    | 2 +-
>   lib/flow_classify/rte_flow_classify.h | 7 -------
>   lib/flow_classify/version.map         | 2 +-
>   3 files changed, 2 insertions(+), 9 deletions(-)
> 
> diff --git a/examples/flow_classify/meson.build b/examples/flow_classify/meson.build
> index 1be1bf0374..bffceb9465 100644
> --- a/examples/flow_classify/meson.build
> +++ b/examples/flow_classify/meson.build
> @@ -7,7 +7,7 @@
>   # DPDK instance, use 'make'
>   
>   deps += 'flow_classify'
> -allow_experimental_apis = true
> +
>   sources = files(
>           'flow_classify.c',
>   )
> diff --git a/lib/flow_classify/rte_flow_classify.h b/lib/flow_classify/rte_flow_classify.h
> index 82ea92b6a6..3759cd32af 100644
> --- a/lib/flow_classify/rte_flow_classify.h
> +++ b/lib/flow_classify/rte_flow_classify.h

The library is still marked as experimental, but I don't think this is 
valid anymore as it now contains stable APIs. So needs some removals:

Text below in rte_flow_classify.h,

  * @warning
  * @b EXPERIMENTAL:
  * All functions in this file may be changed or removed without prior 
notice.


Also in MAINTAINERS file,
Flow Classify - EXPERIMENTAL

and as the lib is moving into the stable ABI, something should probably
go into the release notes for this.

thanks,
Kevin.

> @@ -157,7 +157,6 @@ struct rte_flow_classify_ipv4_5tuple_stats {
>    * @return
>    *   Handle to flow classifier instance on success or NULL otherwise
>    */
> -__rte_experimental
>   struct rte_flow_classifier *
>   rte_flow_classifier_create(struct rte_flow_classifier_params *params);
>   
> @@ -169,7 +168,6 @@ rte_flow_classifier_create(struct rte_flow_classifier_params *params);
>    * @return
>    *   0 on success, error code otherwise
>    */
> -__rte_experimental
>   int
>   rte_flow_classifier_free(struct rte_flow_classifier *cls);
>   
> @@ -183,7 +181,6 @@ rte_flow_classifier_free(struct rte_flow_classifier *cls);
>    * @return
>    *   0 on success, error code otherwise
>    */
> -__rte_experimental
>   int
>   rte_flow_classify_table_create(struct rte_flow_classifier *cls,
>   		struct rte_flow_classify_table_params *params);
> @@ -205,7 +202,6 @@ rte_flow_classify_table_create(struct rte_flow_classifier *cls,
>    * @return
>    *   0 on success, error code otherwise
>    */
> -__rte_experimental
>   int
>   rte_flow_classify_validate(struct rte_flow_classifier *cls,
>   		const struct rte_flow_attr *attr,
> @@ -232,7 +228,6 @@ rte_flow_classify_validate(struct rte_flow_classifier *cls,
>    * @return
>    *   A valid handle in case of success, NULL otherwise.
>    */
> -__rte_experimental
>   struct rte_flow_classify_rule *
>   rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls,
>   		const struct rte_flow_attr *attr,
> @@ -251,7 +246,6 @@ rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls,
>    * @return
>    *   0 on success, error code otherwise.
>    */
> -__rte_experimental
>   int
>   rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls,
>   		struct rte_flow_classify_rule *rule);
> @@ -273,7 +267,6 @@ rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls,
>    * @return
>    *   0 on success, error code otherwise.
>    */
> -__rte_experimental
>   int
>   rte_flow_classifier_query(struct rte_flow_classifier *cls,
>   		struct rte_mbuf **pkts,
> diff --git a/lib/flow_classify/version.map b/lib/flow_classify/version.map
> index 49bc25c6a0..b7a888053b 100644
> --- a/lib/flow_classify/version.map
> +++ b/lib/flow_classify/version.map
> @@ -1,4 +1,4 @@
> -EXPERIMENTAL {
> +DPDK_22 {
>   	global:
>   
>   	rte_flow_classifier_create;
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
  @ 2021-09-23 14:56  3%       ` Maxime Coquelin
  2021-09-24  1:53  4%         ` Xia, Chenbo
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2021-09-23 14:56 UTC (permalink / raw)
  To: Hu, Jiayu, Ding, Xuan, dev, Burakov, Anatoly, Xia, Chenbo
  Cc: Jiang, Cheng1, Richardson, Bruce, Pai G, Sunil, Wang, Yinan,
	Yang, YvonneX



On 9/23/21 16:39, Hu, Jiayu wrote:
> Hi Xuan,
> 
>> -----Original Message-----
>> From: Ding, Xuan <xuan.ding@intel.com>
>> Sent: Friday, September 17, 2021 1:26 PM
>> To: dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>;
>> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
>> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
>> Richardson, Bruce <bruce.richardson@intel.com>; Pai G, Sunil
>> <sunil.pai.g@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Yang,
>> YvonneX <yvonnex.yang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
>> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
>>
>> The use of IOMMU has many advantages, such as isolation and address
>> translation. This patch extends the capability of the DMA engine to use IOMMU if
>> the DMA engine is bound to vfio.
>>
>> When the memory table is set, the guest memory will be mapped into the
>> default container of DPDK.
>>
>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>> ---
>>   lib/vhost/rte_vhost.h  |  1 +
>>   lib/vhost/vhost_user.c | 57
>> +++++++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 57 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index
>> 8d875e9322..e0537249f3 100644
>> --- a/lib/vhost/rte_vhost.h
>> +++ b/lib/vhost/rte_vhost.h
>> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
>>   	void	 *mmap_addr;
>>   	uint64_t mmap_size;
>>   	int fd;
>> +	uint64_t dma_map_success;
> 
> How about using bool for dma_map_success?

The bigger problem here is that you are breaking the ABI.

>>   };
>>
>>   /**
>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index
>> 29a4c9af60..7d1d592b86 100644
>> --- a/lib/vhost/vhost_user.c
>> +++ b/lib/vhost/vhost_user.c
>> @@ -45,6 +45,8 @@
>>   #include <rte_common.h>
>>   #include <rte_malloc.h>
>>   #include <rte_log.h>
>> +#include <rte_vfio.h>
>> +#include <rte_errno.h>
>>
>>   #include "iotlb.h"
>>   #include "vhost.h"
>> @@ -141,6 +143,46 @@ get_blk_size(int fd)
>>   	return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;  }
>>
>> +static int
>> +async_dma_map(struct rte_vhost_mem_region *region, bool do_map) {
>> +	int ret = 0;
>> +	uint64_t host_iova;
>> +	host_iova = rte_mem_virt2iova((void *)(uintptr_t)region-
>>> host_user_addr);
>> +	if (do_map) {
>> +		/* Add mapped region into the default container of DPDK. */
>> +		ret =
>> rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
>> +						 region->host_user_addr,
>> +						 host_iova,
>> +						 region->size);
>> +		region->dma_map_success = ret == 0;
>> +		if (ret) {
>> +			if (rte_errno != ENODEV && rte_errno != ENOTSUP) {
>> +				VHOST_LOG_CONFIG(ERR, "DMA engine map
>> failed\n");
>> +				return ret;
>> +			}
>> +			return 0;
> 
> Why return 0, if ret is -1 here?
> 
> Thanks,
> Jiayu
> 
>> +		}
>> +		return ret;
>> +	} else {
>> +		/* No need to do vfio unmap if the map failed. */
>> +		if (!region->dma_map_success)
>> +			return 0;
>> +
>> +		/* Remove mapped region from the default container of
>> DPDK. */
>> +		ret =
>> rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
>> +						   region->host_user_addr,
>> +						   host_iova,
>> +						   region->size);
>> +		if (ret) {
>> +			VHOST_LOG_CONFIG(ERR, "DMA engine unmap
>> failed\n");
>> +			return ret;
>> +		}
>> +		region->dma_map_success = 0;
>> +	}
>> +	return ret;
>> +}
>> +
>>   static void
>>   free_mem_region(struct virtio_net *dev)  { @@ -153,6 +195,9 @@
>> free_mem_region(struct virtio_net *dev)
>>   	for (i = 0; i < dev->mem->nregions; i++) {
>>   		reg = &dev->mem->regions[i];
>>   		if (reg->host_user_addr) {
>> +			if (dev->async_copy && rte_vfio_is_enabled("vfio"))
>> +				async_dma_map(reg, false);
>> +
>>   			munmap(reg->mmap_addr, reg->mmap_size);
>>   			close(reg->fd);
>>   		}
>> @@ -1157,6 +1202,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
>>   	uint64_t mmap_size;
>>   	uint64_t alignment;
>>   	int populate;
>> +	int ret;
>>
>>   	/* Check for memory_size + mmap_offset overflow */
>>   	if (mmap_offset >= -region->size) {
>> @@ -1210,13 +1256,22 @@ vhost_user_mmap_region(struct virtio_net *dev,
>>   	region->mmap_size = mmap_size;
>>   	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr +
>> mmap_offset;
>>
>> -	if (dev->async_copy)
>> +	if (dev->async_copy) {
>>   		if (add_guest_pages(dev, region, alignment) < 0) {
>>   			VHOST_LOG_CONFIG(ERR,
>>   					"adding guest pages to region
>> failed.\n");
>>   			return -1;
>>   		}
>>
>> +		if (rte_vfio_is_enabled("vfio")) {
>> +			ret = async_dma_map(region, true);
>> +			if (ret < 0) {
>> +				VHOST_LOG_CONFIG(ERR, "Configure
>> IOMMU for DMA engine failed\n");
>> +				return -1;
>> +			}
>> +		}
>> +	}
>> +
>>   	VHOST_LOG_CONFIG(INFO,
>>   			"guest memory region size: 0x%" PRIx64 "\n"
>>   			"\t guest physical addr: 0x%" PRIx64 "\n"
>> --
>> 2.17.1
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
  2021-09-23 14:56  3%       ` Maxime Coquelin
@ 2021-09-24  1:53  4%         ` Xia, Chenbo
  2021-09-24  7:13  0%           ` Maxime Coquelin
  0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-09-24  1:53 UTC (permalink / raw)
  To: Maxime Coquelin, Hu, Jiayu, Ding, Xuan, dev, Burakov, Anatoly
  Cc: Jiang, Cheng1, Richardson, Bruce, Pai G, Sunil, Wang, Yinan,
	Yang, YvonneX

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, September 23, 2021 10:56 PM
> To: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
> dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> 
> 
> 
> On 9/23/21 16:39, Hu, Jiayu wrote:
> > Hi Xuan,
> >
> >> -----Original Message-----
> >> From: Ding, Xuan <xuan.ding@intel.com>
> >> Sent: Friday, September 17, 2021 1:26 PM
> >> To: dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>;
> >> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> >> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> >> Richardson, Bruce <bruce.richardson@intel.com>; Pai G, Sunil
> >> <sunil.pai.g@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Yang,
> >> YvonneX <yvonnex.yang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> >> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> >>
> >> The use of IOMMU has many advantages, such as isolation and address
> >> translation. This patch extends the capability of the DMA engine to use IOMMU if
> >> the DMA engine is bound to vfio.
> >>
> >> When set memory table, the guest memory will be mapped into the default
> >> container of DPDK.
> >>
> >> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> >> ---
> >>   lib/vhost/rte_vhost.h  |  1 +
> >>   lib/vhost/vhost_user.c | 57
> >> +++++++++++++++++++++++++++++++++++++++++-
> >>   2 files changed, 57 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index
> >> 8d875e9322..e0537249f3 100644
> >> --- a/lib/vhost/rte_vhost.h
> >> +++ b/lib/vhost/rte_vhost.h
> >> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
> >>   	void	 *mmap_addr;
> >>   	uint64_t mmap_size;
> >>   	int fd;
> >> +	uint64_t dma_map_success;
> >
> > How about using bool for dma_map_success?
> 
> The bigger problem here is that you are breaking the ABI.

Maybe this kind of driver-facing struct/function should be removed
from the ABI, since we are currently refactoring the DPDK ABI.

/Chenbo

> 
> >>   };
> >>
> >>   /**


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] Minutes of Technical Board Meeting, 2021-Sep-22
@ 2021-09-24  4:38  5% Jerin Jacob Kollanukkaran
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob Kollanukkaran @ 2021-09-24  4:38 UTC (permalink / raw)
  To: dev; +Cc: techboard

Minutes of Technical Board Meeting, 2021-Sep-22

Members Attending
-----------------
-Aaron
-Bruce
-Ferruh
-Hemant
-Honnappa
-Jerin (Chair)
-Kevin
-Konstantin
-Maxime
-Olivier
-Stephen
-Thomas

NOTE: The technical board meets every second Wednesday at https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
 
NOTE: Next meeting will be on Wednesday 2021-Oct-06 @3pm UTC, and will be chaired by Kevin


# doc: announce changes in EFD function
- TB approved the ABI change, as the v21.11 release can break the ABI and it is a minor parameter change

# DPDK adaptation to DMARC compliant mailing list
- TB decided to have the first trial run on the web and users mailing lists, as they are low-volume mailing lists
- Ali to send an email on this on the announcement mailing list as the first step for migration
- Link for the announcement: http://mails.dpdk.org/archives/dev/2021-September/221037.html

# OOPS handling EAL feature
- http://patches.dpdk.org/project/dpdk/patch/20210906041732.1019743-2-jerinj@marvell.com/
- There is no strong desire to have this feature in DPDK/EAL
- Some of the aspects overlap with systemd in Linux OS
- Author decided to withdraw the patch due to lack of interest.

# eal: update rte_eal_wait_lcore definition
- TB approved the ABI change, as the v21.11 release can break the ABI

# Update on DTS usability
- Honnappa to send the updated document on DTS usability reviews

# Slack channel - Timelines for general availability
- New slack channel details updated on the DPDK website
- Ferruh to send the details on the same on the announcement mailing list

# DPDK collaboration with ORAN AAL community
- Honnappa to share liaison note for DPDK TB recommendation
- Link for the note: http://mails.dpdk.org/archives/dev/2021-September/220949.html
- Jerin to share the same liaison note with the ORAN AAL community

^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
  2021-09-24  1:53  4%         ` Xia, Chenbo
@ 2021-09-24  7:13  0%           ` Maxime Coquelin
  2021-09-24  7:35  4%             ` Xia, Chenbo
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2021-09-24  7:13 UTC (permalink / raw)
  To: Xia, Chenbo, Hu, Jiayu, Ding, Xuan, dev, Burakov, Anatoly
  Cc: Jiang, Cheng1, Richardson, Bruce, Pai G, Sunil, Wang, Yinan,
	Yang, YvonneX



On 9/24/21 03:53, Xia, Chenbo wrote:
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Thursday, September 23, 2021 10:56 PM
>> To: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
>> dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>; Xia, Chenbo
>> <chenbo.xia@intel.com>
>> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
>> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
>> Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
>>
>>
>>
>> On 9/23/21 16:39, Hu, Jiayu wrote:
>>> Hi Xuan,
>>>
>>>> -----Original Message-----
>>>> From: Ding, Xuan <xuan.ding@intel.com>
>>>> Sent: Friday, September 17, 2021 1:26 PM
>>>> To: dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>;
>>>> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
>>>> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
>>>> Richardson, Bruce <bruce.richardson@intel.com>; Pai G, Sunil
>>>> <sunil.pai.g@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Yang,
>>>> YvonneX <yvonnex.yang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
>>>> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
>>>>
>>>> The use of IOMMU has many advantages, such as isolation and address
>>>> translation. This patch extends the capability of the DMA engine to use IOMMU if
>>>> the DMA engine is bound to vfio.
>>>>
>>>> When set memory table, the guest memory will be mapped into the default
>>>> container of DPDK.
>>>>
>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>> ---
>>>>    lib/vhost/rte_vhost.h  |  1 +
>>>>    lib/vhost/vhost_user.c | 57
>>>> +++++++++++++++++++++++++++++++++++++++++-
>>>>    2 files changed, 57 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index
>>>> 8d875e9322..e0537249f3 100644
>>>> --- a/lib/vhost/rte_vhost.h
>>>> +++ b/lib/vhost/rte_vhost.h
>>>> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
>>>>    	void	 *mmap_addr;
>>>>    	uint64_t mmap_size;
>>>>    	int fd;
>>>> +	uint64_t dma_map_success;
>>>
>>> How about using bool for dma_map_success?
>>
>> The bigger problem here is that you are breaking the ABI.
> 
> Maybe this kind of driver-facing structs/functions should be removed
> from ABI, since we are refactoring DPDK ABI recently.

It has actually been exposed for SPDK; we cannot just remove it from
the API.

Maxime

> /Chenbo
> 
>>
>>>>    };
>>>>
>>>>    /**
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
  2021-09-24  7:13  0%           ` Maxime Coquelin
@ 2021-09-24  7:35  4%             ` Xia, Chenbo
  2021-09-24  8:18  3%               ` Ding, Xuan
  0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-09-24  7:35 UTC (permalink / raw)
  To: Maxime Coquelin, Hu, Jiayu, Ding, Xuan, dev, Burakov, Anatoly
  Cc: Jiang, Cheng1, Richardson, Bruce, Pai G, Sunil, Wang, Yinan,
	Yang, YvonneX

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, September 24, 2021 3:14 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>; Ding,
> Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
> <anatoly.burakov@intel.com>
> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> 
> 
> 
> On 9/24/21 03:53, Xia, Chenbo wrote:
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >> Sent: Thursday, September 23, 2021 10:56 PM
> >> To: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
> >> dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>; Xia, Chenbo
> >> <chenbo.xia@intel.com>
> >> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> >> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> >> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> >> Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> >>
> >>
> >>
> >> On 9/23/21 16:39, Hu, Jiayu wrote:
> >>> Hi Xuan,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ding, Xuan <xuan.ding@intel.com>
> >>>> Sent: Friday, September 17, 2021 1:26 PM
> >>>> To: dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>;
> >>>> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> >>>> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>;
> >>>> Richardson, Bruce <bruce.richardson@intel.com>; Pai G, Sunil
> >>>> <sunil.pai.g@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Yang,
> >>>> YvonneX <yvonnex.yang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> >>>> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> >>>>
> >>>> The use of IOMMU has many advantages, such as isolation and address
> >>>> translation. This patch extends the capability of the DMA engine to use IOMMU
> if
> >>>> the DMA engine is bound to vfio.
> >>>>
> >>>> When set memory table, the guest memory will be mapped into the default
> >>>> container of DPDK.
> >>>>
> >>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> >>>> ---
> >>>>    lib/vhost/rte_vhost.h  |  1 +
> >>>>    lib/vhost/vhost_user.c | 57
> >>>> +++++++++++++++++++++++++++++++++++++++++-
> >>>>    2 files changed, 57 insertions(+), 1 deletion(-)
> >>>>
> >>>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index
> >>>> 8d875e9322..e0537249f3 100644
> >>>> --- a/lib/vhost/rte_vhost.h
> >>>> +++ b/lib/vhost/rte_vhost.h
> >>>> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
> >>>>    	void	 *mmap_addr;
> >>>>    	uint64_t mmap_size;
> >>>>    	int fd;
> >>>> +	uint64_t dma_map_success;
> >>>
> >>> How about using bool for dma_map_success?
> >>
> >> The bigger problem here is that you are breaking the ABI.
> >
> > Maybe this kind of driver-facing structs/functions should be removed
> > from ABI, since we are refactoring DPDK ABI recently.
> 
> It has actually been exposed for SPDK, we cannot just remove it from
> API.

'exposed' does not mean it has to be part of the ABI. Like 'driver_sdk_headers'
in the ethdev lib, those headers can be exposed without being included in the
ABI. I see SPDK is using that for building its lib. I am not sure whether, in
this case, the SPDK vhost lib should be considered an application.

Thanks,
Chenbo 

> 
> Maxime
> 
> > /Chenbo
> >
> >>
> >>>>    };
> >>>>
> >>>>    /**
> >


^ permalink raw reply	[relevance 4%]
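For context on the 'driver_sdk_headers' remark above: DPDK's meson build lets a library install two classes of headers, public ones (part of the stable API/ABI surface) and driver-facing ones that external driver-style consumers such as SPDK can build against without ABI guarantees. A rough, illustrative sketch of the pattern (file names are examples, not an exact copy of lib/ethdev/meson.build):

```meson
# public headers: covered by API/ABI stability rules
headers = files('rte_ethdev.h')

# driver-facing headers: installed for out-of-tree drivers and
# SPDK-style consumers, but explicitly outside the ABI contract
driver_sdk_headers = files('ethdev_driver.h')
```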

* Re: [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost
  2021-09-24  7:35  4%             ` Xia, Chenbo
@ 2021-09-24  8:18  3%               ` Ding, Xuan
  0 siblings, 0 replies; 200+ results
From: Ding, Xuan @ 2021-09-24  8:18 UTC (permalink / raw)
  To: Xia, Chenbo, Maxime Coquelin, Hu, Jiayu, dev, Burakov, Anatoly
  Cc: Jiang, Cheng1, Richardson, Bruce, Pai G, Sunil, Wang, Yinan,
	Yang, YvonneX



> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Friday, September 24, 2021 3:36 PM
> To: Maxime Coquelin <maxime.coquelin@redhat.com>; Hu, Jiayu
> <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>; dev@dpdk.org;
> Burakov, Anatoly <anatoly.burakov@intel.com>
> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> Subject: RE: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> 
> > -----Original Message-----
> > From: Maxime Coquelin <maxime.coquelin@redhat.com>
> > Sent: Friday, September 24, 2021 3:14 PM
> > To: Xia, Chenbo <chenbo.xia@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>;
> Ding,
> > Xuan <xuan.ding@intel.com>; dev@dpdk.org; Burakov, Anatoly
> > <anatoly.burakov@intel.com>
> > Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> > Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> > Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> >
> >
> >
> > On 9/24/21 03:53, Xia, Chenbo wrote:
> > >> -----Original Message-----
> > >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> > >> Sent: Thursday, September 23, 2021 10:56 PM
> > >> To: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>;
> > >> dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>; Xia,
> Chenbo
> > >> <chenbo.xia@intel.com>
> > >> Cc: Jiang, Cheng1 <cheng1.jiang@intel.com>; Richardson, Bruce
> > >> <bruce.richardson@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com>; Wang,
> > >> Yinan <yinan.wang@intel.com>; Yang, YvonneX <yvonnex.yang@intel.com>
> > >> Subject: Re: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> > >>
> > >>
> > >>
> > >> On 9/23/21 16:39, Hu, Jiayu wrote:
> > >>> Hi Xuan,
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Ding, Xuan <xuan.ding@intel.com>
> > >>>> Sent: Friday, September 17, 2021 1:26 PM
> > >>>> To: dev@dpdk.org; Burakov, Anatoly <anatoly.burakov@intel.com>;
> > >>>> maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> > >>>> Cc: Hu, Jiayu <jiayu.hu@intel.com>; Jiang, Cheng1
> > <cheng1.jiang@intel.com>;
> > >>>> Richardson, Bruce <bruce.richardson@intel.com>; Pai G, Sunil
> > >>>> <sunil.pai.g@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Yang,
> > >>>> YvonneX <yvonnex.yang@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> > >>>> Subject: [PATCH v2 2/2] vhost: enable IOMMU for async vhost
> > >>>>
> > >>>> The use of IOMMU has many advantages, such as isolation and address
> > >>>> translation. This patch extends the capability of the DMA engine to use
> IOMMU
> > if
> > >>>> the DMA engine is bound to vfio.
> > >>>>
> > >>>> When set memory table, the guest memory will be mapped into the
> default
> > >>>> container of DPDK.
> > >>>>
> > >>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > >>>> ---
> > >>>>    lib/vhost/rte_vhost.h  |  1 +
> > >>>>    lib/vhost/vhost_user.c | 57
> > >>>> +++++++++++++++++++++++++++++++++++++++++-
> > >>>>    2 files changed, 57 insertions(+), 1 deletion(-)
> > >>>>
> > >>>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h index
> > >>>> 8d875e9322..e0537249f3 100644
> > >>>> --- a/lib/vhost/rte_vhost.h
> > >>>> +++ b/lib/vhost/rte_vhost.h
> > >>>> @@ -127,6 +127,7 @@ struct rte_vhost_mem_region {
> > >>>>    	void	 *mmap_addr;
> > >>>>    	uint64_t mmap_size;
> > >>>>    	int fd;
> > >>>> +	uint64_t dma_map_success;
> > >>>
> > >>> How about using bool for dma_map_success?
> > >>
> > >> The bigger problem here is that you are breaking the ABI.
> > >
> > > Maybe this kind of driver-facing structs/functions should be removed
> > > from ABI, since we are refactoring DPDK ABI recently.
> >
> > It has actually been exposed for SPDK, we cannot just remove it from
> > API.
> 
> 'exposed' does not mean it has to be ABI. Like 'driver_sdk_headers' in
> ethdev lib, those headers can be exposed but do not include ABI. I see
> SPDK is using that for building its lib. Not sure in this case, the SPDK
> Vhost lib should be considered as application.
> 
> Thanks,
> Chenbo

Thanks for the discussion. Since the possible ABI change is in the future,
I am considering adding dma_map_success to the virtio_net structure, to indicate
the map status of each region. This flag could even be removed if we do not consider
the restriction on users (kernel driver support). Details can be provided in the next version of the patch.

Hope to get your insights. :)

Thanks,
Xuan

> 
> >
> > Maxime
> >
> > > /Chenbo
> > >
> > >>
> > >>>>    };
> > >>>>
> > >>>>    /**
> > >


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v23 0/6] support dmadev
  @ 2021-09-24 10:53  3% ` Chengwen Feng
  2021-10-09  9:33  3% ` [dpdk-dev] [PATCH v24 " Chengwen Feng
  1 sibling, 0 replies; 200+ results
From: Chengwen Feng @ 2021-09-24 10:53 UTC (permalink / raw)
  To: thomas, ferruh.yigit, bruce.richardson, jerinj, jerinjacobk,
	andrew.rybchenko
  Cc: dev, mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
	honnappa.nagarahalli, david.marchand, sburla, pkapoor,
	konstantin.ananyev, conor.walsh, kevin.laatz

This patch set contains six patches adding the new dmadev library.

Chengwen Feng (6):
  dmadev: introduce DMA device library
  dmadev: add control plane function support
  dmadev: add data plane function support
  dmadev: add multi-process support
  dma/skeleton: introduce skeleton dmadev driver
  app/test: add dmadev API test

---
v23:
* split multi-process support from 1st patch.
* fix some static check warning.
* fix skeleton cpu thread zero_req_count flip bug.
* add test_dmadev_api.h.
* add the description of modifying the dmadev state when init OK.
v22:
* function prefix change from rte_dmadev_* to rte_dma_*.
* change to prefix comment in most scenarios.
* dmadev dev_id use int16_t type.
* fix typo.
* organize patchsets in incremental mode.
v21:
* add comment for reserved fields of struct rte_dmadev.
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
  callbacks of the PMD; this is mainly used to enhance ABI
  compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
  for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix the potential memory leak of ut.
* redefine skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when execute ut.

 MAINTAINERS                            |    7 +
 app/test/meson.build                   |    4 +
 app/test/test_dmadev.c                 |   41 +
 app/test/test_dmadev_api.c             |  574 +++++++++++++
 app/test/test_dmadev_api.h             |    5 +
 config/rte_config.h                    |    3 +
 doc/api/doxy-api-index.md              |    1 +
 doc/api/doxy-api.conf.in               |    1 +
 doc/guides/dmadevs/index.rst           |   12 +
 doc/guides/index.rst                   |    1 +
 doc/guides/prog_guide/dmadev.rst       |  127 +++
 doc/guides/prog_guide/img/dmadev.svg   |  283 +++++++
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/rel_notes/release_21_11.rst |    7 +
 drivers/dma/meson.build                |    6 +
 drivers/dma/skeleton/meson.build       |    7 +
 drivers/dma/skeleton/skeleton_dmadev.c |  570 +++++++++++++
 drivers/dma/skeleton/skeleton_dmadev.h |   61 ++
 drivers/dma/skeleton/version.map       |    3 +
 drivers/meson.build                    |    1 +
 lib/dmadev/meson.build                 |    7 +
 lib/dmadev/rte_dmadev.c                |  728 ++++++++++++++++
 lib/dmadev/rte_dmadev.h                | 1074 ++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h           |  181 ++++
 lib/dmadev/rte_dmadev_pmd.h            |   60 ++
 lib/dmadev/version.map                 |   35 +
 lib/meson.build                        |    1 +
 27 files changed, 3801 insertions(+)
 create mode 100644 app/test/test_dmadev.c
 create mode 100644 app/test/test_dmadev_api.c
 create mode 100644 app/test/test_dmadev_api.h
 create mode 100644 doc/guides/dmadevs/index.rst
 create mode 100644 doc/guides/prog_guide/dmadev.rst
 create mode 100644 doc/guides/prog_guide/img/dmadev.svg
 create mode 100644 drivers/dma/meson.build
 create mode 100644 drivers/dma/skeleton/meson.build
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
 create mode 100644 drivers/dma/skeleton/version.map
 create mode 100644 lib/dmadev/meson.build
 create mode 100644 lib/dmadev/rte_dmadev.c
 create mode 100644 lib/dmadev/rte_dmadev.h
 create mode 100644 lib/dmadev/rte_dmadev_core.h
 create mode 100644 lib/dmadev/rte_dmadev_pmd.h
 create mode 100644 lib/dmadev/version.map

-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v11 06/12] pdump: support pcapng and filtering
  @ 2021-09-24 15:21  1%   ` Stephen Hemminger
  2021-09-24 15:22  1%   ` [dpdk-dev] [PATCH v11 11/12] doc: changes for new pcapng and dumpcap Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-24 15:21 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.

The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.

The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.

Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics
on packets captured, filtered, and missed (because the ring was full).

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/meson.build       |   4 +-
 lib/pdump/meson.build |   2 +-
 lib/pdump/rte_pdump.c | 427 ++++++++++++++++++++++++++++++------------
 lib/pdump/rte_pdump.h | 113 ++++++++++-
 lib/pdump/version.map |   8 +
 5 files changed, 427 insertions(+), 127 deletions(-)

diff --git a/lib/meson.build b/lib/meson.build
index ba88e9eabc58..9812e54f1a12 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
         'acl',
         'bbdev',
         'bitratestats',
+        'bpf',
         'cfgfile',
         'compressdev',
         'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
         'member',
         'pcapng',
         'power',
-        'pdump',
         'rawdev',
         'regexdev',
         'rib',
@@ -55,10 +55,10 @@ libraries = [
         'ipsec', # ipsec lib depends on net, crypto and security
         'fib', #fib lib depends on rib
         'port', # pkt framework libs which use other libs from above
+        'pdump', # pdump lib depends on bpf
         'table',
         'pipeline',
         'flow_classify', # flow_classify lib depends on pkt framework table lib
-        'bpf',
         'graph',
         'node',
 ]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('rte_pdump.c')
 headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..82b4f622ca37 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
 #include <rte_ethdev.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_memzone.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
+#include <rte_pcapng.h>
 
 #include "rte_pdump.h"
 
@@ -27,30 +29,23 @@ enum pdump_operation {
 	ENABLE = 2
 };
 
+/* Internal version number in request */
 enum pdump_version {
-	V1 = 1
+	V1 = 1,		    /* no filtering or snap */
+	V2 = 2,
 };
 
 struct pdump_request {
 	uint16_t ver;
 	uint16_t op;
 	uint32_t flags;
-	union pdump_data {
-		struct enable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} en_v1;
-		struct disable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} dis_v1;
-	} data;
+	char device[RTE_DEV_NAME_MAX_LEN];
+	uint16_t queue;
+	struct rte_ring *ring;
+	struct rte_mempool *mp;
+
+	const struct rte_bpf_prm *prm;
+	uint32_t snaplen;
 };
 
 struct pdump_response {
@@ -63,80 +58,136 @@ static struct pdump_rxtx_cbs {
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	const struct rte_eth_rxtx_callback *cb;
-	void *filter;
+	const struct rte_bpf *filter;
+	enum pdump_version ver;
+	uint32_t snaplen;
 } rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
 tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+	   enum rte_pcapng_direction direction,
+	   struct rte_mbuf **pkts, uint16_t nb_pkts,
+	   const struct pdump_rxtx_cbs *cbs,
+	   struct rte_pdump_stats *stats)
 {
 	unsigned int i;
 	int ring_enq;
 	uint16_t d_pkts = 0;
 	struct rte_mbuf *dup_bufs[nb_pkts];
-	struct pdump_rxtx_cbs *cbs;
+	uint64_t ts;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	struct rte_mbuf *p;
+	uint64_t rcs[nb_pkts];
+
+	if (cbs->filter)
+		rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
 
-	cbs  = user_params;
+	ts = rte_get_tsc_cycles();
 	ring = cbs->ring;
 	mp = cbs->mp;
 	for (i = 0; i < nb_pkts; i++) {
-		p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
-		if (p)
+		/*
+		 * This uses same BPF return value convention as socket filter
+		 * and pcap_offline_filter.
+		 * if program returns zero
+		 * then packet doesn't match the filter (will be ignored).
+		 */
+		if (cbs->filter && rcs[i] == 0) {
+			__atomic_fetch_add(&stats->filtered,
+					   1, __ATOMIC_RELAXED);
+			continue;
+		}
+
+		/*
+		 * If using pcapng then want to wrap packets
+		 * otherwise a simple copy.
+		 */
+		if (cbs->ver == V2)
+			p = rte_pcapng_copy(port_id, queue,
+					    pkts[i], mp, cbs->snaplen,
+					    ts, direction);
+		else
+			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+		if (unlikely(p == NULL))
+			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+		else
 			dup_bufs[d_pkts++] = p;
 	}
 
+	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		PDUMP_LOG(DEBUG,
-			"only %d of packets enqueued to ring\n", ring_enq);
+		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
 
 static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
 	struct rte_mbuf **pkts, uint16_t nb_pkts,
-	uint16_t max_pkts __rte_unused,
-	void *user_params)
+	uint16_t max_pkts __rte_unused, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
 		struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &rx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"rx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_first_rx_callback(port, qid,
 								pdump_rx, cbs);
 			if (cbs->cb == NULL) {
@@ -145,8 +196,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -170,26 +220,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 }
 
 static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &tx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"tx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
 								cbs);
 			if (cbs->cb == NULL) {
@@ -198,8 +254,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -228,37 +283,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
 	uint16_t port;
 	int ret = 0;
+	struct rte_bpf *filter = NULL;
 	uint32_t flags;
 	uint16_t operation;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 
-	flags = p->flags;
-	operation = p->op;
-	if (operation == ENABLE) {
-		ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
-				&port);
-		if (ret < 0) {
+	/* Check for possible DPDK version mismatch */
+	if (!(p->ver == V1 || p->ver == V2)) {
+		PDUMP_LOG(ERR,
+			  "incorrect client version %u\n", p->ver);
+		return -EINVAL;
+	}
+
+	if (p->prm) {
+		if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
 			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.en_v1.device);
+				  "invalid BPF program type: %u\n",
+				  p->prm->prog_arg.type);
 			return -EINVAL;
 		}
-		queue = p->data.en_v1.queue;
-		ring = p->data.en_v1.ring;
-		mp = p->data.en_v1.mp;
-	} else {
-		ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
-				&port);
-		if (ret < 0) {
-			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.dis_v1.device);
-			return -EINVAL;
+
+		filter = rte_bpf_load(p->prm);
+		if (filter == NULL) {
+			PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+				  rte_strerror(rte_errno));
+			return -rte_errno;
 		}
-		queue = p->data.dis_v1.queue;
-		ring = p->data.dis_v1.ring;
-		mp = p->data.dis_v1.mp;
+	}
+
+	flags = p->flags;
+	operation = p->op;
+	queue = p->queue;
+	ring = p->ring;
+	mp = p->mp;
+
+	ret = rte_eth_dev_get_port_by_name(p->device, &port);
+	if (ret < 0) {
+		PDUMP_LOG(ERR,
+			  "failed to get port id for device id=%s\n",
+			  p->device);
+		return -EINVAL;
 	}
 
 	/* validation if packet capture is for all queues */
@@ -296,8 +361,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register RX callback */
 	if (flags & RTE_PDUMP_FLAG_RX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
-		ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -305,8 +371,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register TX callback */
 	if (flags & RTE_PDUMP_FLAG_TX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
-		ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -332,7 +399,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 		resp->err_value = set_pdump_rxtx_cbs(cli_req);
 	}
 
-	strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_resp.len_param = sizeof(*resp);
 	mp_resp.num_fds = 0;
 	if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +414,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 int
 rte_pdump_init(void)
 {
+	const struct rte_memzone *mz;
 	int ret;
 
+	mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+	pdump_stats = mz->addr;
+
 	ret = rte_mp_action_register(PDUMP_MP, pdump_server);
 	if (ret && rte_errno != ENOTSUP)
 		return -1;
@@ -392,14 +469,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 static int
 pdump_validate_flags(uint32_t flags)
 {
-	if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
-		flags != RTE_PDUMP_FLAG_RXTX) {
+	if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
 		PDUMP_LOG(ERR,
 			"invalid flags, should be either rx/tx/rxtx\n");
 		rte_errno = EINVAL;
 		return -1;
 	}
 
+	/* mask off the flags we know about */
+	if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+		PDUMP_LOG(ERR,
+			  "unknown flags: %#x\n", flags);
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -426,12 +510,12 @@ pdump_validate_port(uint16_t port, char *name)
 }
 
 static int
-pdump_prepare_client_request(char *device, uint16_t queue,
-				uint32_t flags,
-				uint16_t operation,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+			     uint32_t flags, uint32_t snaplen,
+			     uint16_t operation,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     const struct rte_bpf_prm *prm)
 {
 	int ret = -1;
 	struct rte_mp_msg mp_req, *mp_rep;
@@ -440,26 +524,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	struct pdump_request *req = (struct pdump_request *)mp_req.param;
 	struct pdump_response *resp;
 
-	req->ver = 1;
-	req->flags = flags;
+	memset(req, 0, sizeof(*req));
+
+	req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+	req->flags = flags & RTE_PDUMP_FLAG_RXTX;
 	req->op = operation;
+	req->queue = queue;
+	rte_strscpy(req->device, device, sizeof(req->device));
+
 	if ((operation & ENABLE) != 0) {
-		strlcpy(req->data.en_v1.device, device,
-			sizeof(req->data.en_v1.device));
-		req->data.en_v1.queue = queue;
-		req->data.en_v1.ring = ring;
-		req->data.en_v1.mp = mp;
-		req->data.en_v1.filter = filter;
-	} else {
-		strlcpy(req->data.dis_v1.device, device,
-			sizeof(req->data.dis_v1.device));
-		req->data.dis_v1.queue = queue;
-		req->data.dis_v1.ring = NULL;
-		req->data.dis_v1.mp = NULL;
-		req->data.dis_v1.filter = NULL;
+		req->ring = ring;
+		req->mp = mp;
+		req->prm = prm;
+		req->snaplen = snaplen;
 	}
 
-	strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_req.len_param = sizeof(*req);
 	mp_req.num_fds = 0;
 	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -477,11 +557,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	return ret;
 }
 
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
-			struct rte_ring *ring,
-			struct rte_mempool *mp,
-			void *filter)
+/*
+ * There are two versions of this function, because although original API
+ * left place holder for future filter, it never checked the value.
+ * Therefore the API can't depend on application passing a non
+ * bogus value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+	     uint32_t flags, uint32_t snaplen,
+	     struct rte_ring *ring, struct rte_mempool *mp,
+	     const struct rte_bpf_prm *prm)
 {
 	int ret;
 	char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +582,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						ENABLE, ring, mp, filter);
+	if (snaplen == 0)
+		snaplen = UINT32_MAX;
 
-	return ret;
+	return pdump_prepare_client_request(name, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
 }
 
 int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
-				uint32_t flags,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+		 struct rte_ring *ring,
+		 struct rte_mempool *mp,
+		 void *filter __rte_unused)
 {
-	int ret = 0;
+	return pdump_enable(port, queue, flags, 0,
+			    ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm)
+{
+	return pdump_enable(port, queue, flags, snaplen,
+			    ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+			 uint32_t flags, uint32_t snaplen,
+			 struct rte_ring *ring,
+			 struct rte_mempool *mp,
+			 const struct rte_bpf_prm *prm)
+{
+	int ret;
 
 	ret = pdump_validate_ring_mp(ring, mp);
 	if (ret < 0)
@@ -518,10 +626,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						ENABLE, ring, mp, filter);
+	return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
+}
 
-	return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+			     uint32_t flags,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     void *filter __rte_unused)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+					ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *prm)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+					ring, mp, prm);
 }
 
 int
@@ -537,8 +665,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(name, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
@@ -553,8 +681,65 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+		struct rte_pdump_stats *total)
+{
+	uint64_t *sum = (uint64_t *)total;
+	unsigned int i;
+	uint64_t val;
+	uint16_t qid;
+
+	for (qid = 0; qid < nq; qid++) {
+		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			sum[i] += val;
+		}
+	}
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+	struct rte_eth_dev_info dev_info;
+	const struct rte_memzone *mz;
+	int ret;
+
+	memset(stats, 0, sizeof(*stats));
+	ret = rte_eth_dev_info_get(port, &dev_info);
+	if (ret != 0) {
+		PDUMP_LOG(ERR,
+			  "Error during getting device (port %u) info: %s\n",
+			  port, strerror(-ret));
+		return ret;
+	}
+
+	if (pdump_stats == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			PDUMP_LOG(ERR, "pdump stats not initialized\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+
+		mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+		if (mz == NULL) {
+			PDUMP_LOG(ERR, "cannot find pdump stats\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+		pdump_stats = mz->addr;
+	}
+
+	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
 #include <stdint.h>
 #include <rte_mempool.h>
 #include <rte_ring.h>
+#include <rte_bpf.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,7 +27,9 @@ enum {
 	RTE_PDUMP_FLAG_RX = 1,  /* receive direction */
 	RTE_PDUMP_FLAG_TX = 2,  /* transmit direction */
 	/* both receive and transmit directions */
-	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+	RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
 };
 
 /**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused; should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		struct rte_mempool *mp,
 		void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on a given port and queue with filtering.
+ *
+ * @param port_id
+ *  The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ *  BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm);
+
 /**
  * Disables packet capturing on given port and queue.
  *
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  unused; should be NULL
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 				struct rte_mempool *mp,
 				void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on a given device id and queue with filtering.
+ * The device_id can be the name or PCI address of the device.
+ *
+ * @param device_id
+ *  device id on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ *  BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *filter);
+
+
 /**
  * Disables packet capturing on given device_id and queue.
  * device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags);
 
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+	uint64_t accepted; /**< Number of packets accepted by filter. */
+	uint64_t filtered; /**< Number of packets rejected by filter. */
+	uint64_t nombuf;   /**< Number of mbuf allocation failures. */
+	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+	uint64_t reserved[4]; /**< Reserved and pad to cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param stats
+ *   A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ *   Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pdump_enable_bpf;
+	rte_pdump_enable_bpf_by_deviceid;
+	rte_pdump_stats;
+};
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v11 11/12] doc: changes for new pcapng and dumpcap
    2021-09-24 15:21  1%   ` [dpdk-dev] [PATCH v11 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-09-24 15:22  1%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-09-24 15:22 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

Describe the new packet capture library and utilities

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/api/doxy-api-index.md                     |  1 +
 doc/api/doxy-api.conf.in                      |  1 +
 .../howto/img/packet_capture_framework.svg    | 96 +++++++++----------
 doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
 doc/guides/prog_guide/index.rst               |  1 +
 doc/guides/prog_guide/pcapng_lib.rst          | 24 +++++
 doc/guides/prog_guide/pdump_lib.rst           | 28 ++++--
 doc/guides/rel_notes/release_21_11.rst        | 10 ++
 doc/guides/tools/dumpcap.rst                  | 86 +++++++++++++++++
 doc/guides/tools/index.rst                    |  1 +
 10 files changed, 228 insertions(+), 87 deletions(-)
 create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
 create mode 100644 doc/guides/tools/dumpcap.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
   [experimental APIs]  (@ref rte_compat.h),
   [ABI versioning]     (@ref rte_function_versioning.h),
   [version]            (@ref rte_version.h)
+  [pcapng]             (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/lib/metrics \
                           @TOPDIR@/lib/node \
                           @TOPDIR@/lib/net \
+                          @TOPDIR@/lib/pcapng \
                           @TOPDIR@/lib/pci \
                           @TOPDIR@/lib/pdump \
                           @TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
    viewBox="0 0 425.19685 283.46457"
    id="svg2"
    version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="drawing-pcap.svg">
+   inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+   sodipodi:docname="packet_capture_framework.svg">
   <defs
      id="defs4">
     <marker
@@ -228,7 +226,7 @@
        x2="487.64606"
        y2="258.38232"
        gradientUnits="userSpaceOnUse"
-       gradientTransform="translate(-84.916417,744.90779)" />
+       gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
     <linearGradient
        inkscape:collect="always"
        xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
      borderopacity="1.0"
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
-     inkscape:zoom="0.57434918"
-     inkscape:cx="215.17857"
-     inkscape:cy="285.26445"
+     inkscape:zoom="1"
+     inkscape:cx="226.77165"
+     inkscape:cy="78.124511"
      inkscape:document-units="px"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1874"
-     inkscape:window-height="971"
-     inkscape:window-x="2"
-     inkscape:window-y="24"
-     inkscape:window-maximized="0" />
+     inkscape:window-width="2560"
+     inkscape:window-height="1414"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="1"
+     inkscape:document-rotation="0" />
   <metadata
      id="metadata7">
     <rdf:RDF>
@@ -296,7 +295,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
       </cc:Work>
     </rdf:RDF>
   </metadata>
@@ -321,15 +320,15 @@
        y="790.82452" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="61.050636"
        y="807.3205"
-       id="text4152"
-       sodipodi:linespacing="125%"><tspan
+       id="text4152"><tspan
          sodipodi:role="line"
          id="tspan4154"
          x="61.050636"
-         y="807.3205">DPDK Primary Application</tspan></text>
+         y="807.3205"
+         style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6"
@@ -339,19 +338,20 @@
        y="827.01843" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="350.68585"
        y="841.16058"
-       id="text4189"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189"><tspan
          sodipodi:role="line"
          id="tspan4191"
          x="350.68585"
-         y="841.16058">dpdk-pdump</tspan><tspan
+         y="841.16058"
+         style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
          sodipodi:role="line"
          x="350.68585"
          y="856.78558"
-         id="tspan4193">tool</tspan></text>
+         id="tspan4193"
+         style="font-size:12.5px;line-height:1.25">tool</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4"
@@ -361,15 +361,15 @@
        y="891.16315" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70612"
        y="905.3053"
-       id="text4189-1"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1"><tspan
          sodipodi:role="line"
          x="352.70612"
          y="905.3053"
-         id="tspan4193-3">PCAP PMD</tspan></text>
+         id="tspan4193-3"
+         style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-6"
@@ -379,15 +379,15 @@
        y="923.9931" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.02846"
        y="938.13525"
-       id="text4189-0"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-0"><tspan
          sodipodi:role="line"
          x="136.02846"
          y="938.13525"
-         id="tspan4193-6">dpdk_port0</tspan></text>
+         id="tspan4193-6"
+         style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-5"
@@ -397,33 +397,33 @@
        y="824.99817" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="137.54369"
        y="839.14026"
-       id="text4189-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-4"><tspan
          sodipodi:role="line"
          x="137.54369"
          y="839.14026"
-         id="tspan4193-2">librte_pdump</tspan></text>
+         id="tspan4193-2"
+         style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
     <rect
-       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5"
-       width="94.449265"
-       height="35.355339"
-       x="307.7804"
-       y="985.61243" />
+       width="108.21974"
+       height="35.335861"
+       x="297.9809"
+       y="985.62219" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70618"
        y="999.75458"
-       id="text4189-1-8"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8"><tspan
          sodipodi:role="line"
          x="352.70618"
          y="999.75458"
-         id="tspan4193-3-2">capture.pcap</tspan></text>
+         id="tspan4193-3-2"
+         style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
        y="983.14984" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.53352"
        y="1002.785"
-       id="text4189-1-8-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8-4"><tspan
          sodipodi:role="line"
          x="136.53352"
          y="1002.785"
-         id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+         id="tspan4193-3-2-7"
+         style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
     <path
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
        d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2017 Intel Corporation.
 
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
 
 This document describes how the Data Plane Development Kit (DPDK) Packet
 Capture Framework is used for capturing packets on DPDK ports. It is intended
 for users of DPDK who want to know more about the Packet Capture feature and
 for those who want to monitor traffic on DPDK-controlled devices.
 
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+``librte_pdump`` library for collecting packets and the ``librte_pcapng``
+library for writing packets to a file. There are two sample applications:
+``dpdk-dumpcap`` and the older ``dpdk-pdump``.
 
 Introduction
 ------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
 disable packet capture. The library works on a multi process communication model and its
 usage is recommended for debugging purposes.
 
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library.  It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets,
+much as the Wireshark dumpcap does on Linux. It runs as a DPDK secondary
+process and captures packets from one or more interfaces and writes them
+to a file in Pcapng format. The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
 
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
 
 In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
 client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
 
 Some things to note:
 
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
   application which has the packet capture framework initialized already. In
   dpdk, only ``testpmd`` is modified to initialize packet capture framework,
-  other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+  other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
   be used with any application other than the testpmd, the user needs to
   explicitly modify that application to call the packet capture framework
   initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
   for ``pdump`` keyword to see how this is done.
 
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+  ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+  functionality and depends on the Pcap PMD. It is retained only for
+  compatibility reasons; users should use ``dpdk-dumpcap`` instead.
 
 
 Test Environment
 ----------------
 
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
 for packet capturing on the DPDK port in
 :numref:`figure_packet_capture_framework`.
 
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
 
 .. figure:: img/packet_capture_framework.*
 
-   Packet capturing on a DPDK port using the dpdk-pdump tool.
+   Packet capturing on a DPDK port using the dpdk-dumpcap utility.
 
 
 Running the Application
 -----------------------
 
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
 Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
 inspect them using ``tcpdump``.
 
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
 
      sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
 
-#. Launch the pdump tool as follows::
+#. Launch the ``dpdk-dumpcap`` tool as follows::
 
-     sudo <build_dir>/app/dpdk-pdump -- \
-          --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+     sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
 
 #. Send traffic to dpdk_port0 from traffic generator.
-   Inspect packets captured in the file capture.pcap using a tool
-   that can interpret Pcap files, for example tcpdump::
+   Inspect packets captured in the file capture.pcapng using a tool such as
+   tcpdump or tshark that can interpret Pcapng files::
 
-     $tcpdump -nr /tmp/capture.pcap
+     $ tcpdump -nr /tmp/capture.pcapng
      reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
      11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
      11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
     ip_fragment_reassembly_lib
     generic_receive_offload_lib
     generic_segmentation_offload_lib
+    pcapng_lib
     pdump_lib
     multi_proc_support
     kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+The pcapng library provides a way to create files in the Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used, the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository  https://github.com/pcapng/pcapng/
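
As a standalone illustration of the block-based layout the new document describes (plain C; this does not use ``librte_pcapng`` — the constants come from the pcapng draft cited above, and ``write_shb`` is a made-up helper name, not part of any DPDK API), a minimal Section Header Block can be serialized like so:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Constants from the pcapng draft (draft-tuexen-opsawg-pcapng). */
#define PCAPNG_SHB_TYPE         0x0A0D0D0AU
#define PCAPNG_BYTE_ORDER_MAGIC 0x1A2B3C4DU

/* Serialize a minimal (option-less) 28-byte Section Header Block into
 * buf, using host byte order; returns the number of bytes written. */
size_t write_shb(uint8_t *buf)
{
	uint32_t u32;
	uint16_t u16;
	uint64_t u64;
	size_t off = 0;

	u32 = PCAPNG_SHB_TYPE;             /* Block Type */
	memcpy(buf + off, &u32, 4); off += 4;
	u32 = 28;                          /* Block Total Length */
	memcpy(buf + off, &u32, 4); off += 4;
	u32 = PCAPNG_BYTE_ORDER_MAGIC;     /* Byte-Order Magic */
	memcpy(buf + off, &u32, 4); off += 4;
	u16 = 1;                           /* Major Version */
	memcpy(buf + off, &u16, 2); off += 2;
	u16 = 0;                           /* Minor Version */
	memcpy(buf + off, &u16, 2); off += 2;
	u64 = UINT64_MAX;                  /* Section Length: unspecified */
	memcpy(buf + off, &u64, 8); off += 8;
	u32 = 28;                          /* Block Total Length (trailer) */
	memcpy(buf + off, &u32, 4); off += 4;

	return off;
}
```

Readers like Wireshark locate the Byte-Order Magic at offset 8 to decide the endianness of the rest of the section; ``librte_pcapng`` emits this same block structure when creating a file.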
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
 
 .. _pdump_library:
 
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
 
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
 The library does the complete copy of the Rx and Tx mbufs to a new mempool and
 hence it slows down the performance of the applications, so it is recommended
 to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
 
 * ``rte_pdump_enable()``:
   This API enables the packet capture on a given port and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+  This API enables the packet capture on a given port and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_enable_by_deviceid()``:
   This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+  This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+  It also allows setting an optional filter using the DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_disable()``:
   This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
 and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
 the rte_ring that secondary process have passed to these APIs.
 
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` is set the mbuf data is extended with header and trailer
+to match the format of Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
 The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
 For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
 the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
 Use Case: Packet Capturing
 --------------------------
 
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index ad7c1afec0f7..075e9e544c54 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -91,6 +91,16 @@ New Features
   Added command-line options to specify total number of processes and
   current process ID. Each process owns subset of Rx and Tx queues.
 
+* **Revised packet capture framework.**
+
+  * New dpdk-dumpcap program that has most of the features of the
+    wireshark dumpcap utility including: capture of multiple interfaces,
+    filtering, and stopping after a given number of bytes or packets.
+  * New library for writing pcapng packet capture files.
+  * Enhancements to the pdump library to support:
+    * Packet filter with BPF.
+    * Pcapng format with timestamps and meta-data.
+    * A fix for packet capture with stripped VLAN tags.
 
 Removed Items
 -------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool. The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` tool writes files in the Pcapng packet capture
+file format.
+
+Without any options set, it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps, into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+   .. Note::
+      * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+        application which has the packet capture framework initialized already.
+        In DPDK, only the ``testpmd`` application is modified to initialize the
+        packet capture framework, other applications remain untouched. So, if
+        the ``dpdk-dumpcap`` tool has to be used with any application other
+        than testpmd, the user needs to explicitly modify that application to
+        call the packet capture framework initialization code. Refer to the
+        ``app/test-pmd/testpmd.c`` code to see how this is done.
+
+      * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+        the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+   # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+   0. 000:00:03.0
+   1. 000:00:03.1
+
+   # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 6/0
+
+   # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+   * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK.
+
+   * ``-C <byte_limit>`` -- it's a kernel thing
+
+   * ``-t`` -- use a thread per interface
+
+   * Timestamp type.
+
+   * Link data types. Only EN10MB (Ethernet) is supported.
+
+   * Wireless related options:  ``-I|--monitor-mode`` and  ``-k <freq>``
+
+
+.. Note::
+   * The options to ``dpdk-dumpcap`` are like those of the Wireshark dumpcap
+     program and not the same as those of ``dpdk-pdump`` and other DPDK applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
     :maxdepth: 2
     :numbered:
 
+    dumpcap
     proc_info
     pdump
     pmdinfo
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v3 0/2] support IOMMU for DMA device
    @ 2021-09-25 10:03  3% ` Xuan Ding
  2021-09-25 10:33  3% ` [dpdk-dev] [PATCH v4 " Xuan Ding
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-09-25 10:03 UTC (permalink / raw)
  To: dev, anatoly.burakov, maxime.coquelin, chenbo.xia
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, yvonnex.yang, Xuan Ding

This series supports DMA device to use vfio in async vhost.

The first patch extends the capability of current vfio dma mapping
API to allow partial unmapping for adjacent memory if the platform
does not support partial unmapping. The second patch involves the
IOMMU programming for guest memory in async vhost.

v3:
* Move the async_map_status flag to virtio_net structure to avoid
ABI breaking.

v2:
* Add rte_errno filtering for some devices bound in the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.

Xuan Ding (2):
  vfio: allow partially unmapping adjacent memory
  vhost: enable IOMMU for async vhost

 lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
 lib/vhost/vhost.h        |   4 +
 lib/vhost/vhost_user.c   | 112 ++++++++++++-
 3 files changed, 342 insertions(+), 112 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v4 0/2] support IOMMU for DMA device
      2021-09-25 10:03  3% ` [dpdk-dev] [PATCH v3 0/2] support IOMMU for DMA device Xuan Ding
@ 2021-09-25 10:33  3% ` Xuan Ding
  2021-09-27  7:48  3% ` [dpdk-dev] [PATCH v5 " Xuan Ding
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-09-25 10:33 UTC (permalink / raw)
  To: dev, anatoly.burakov, maxime.coquelin, chenbo.xia
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, yvonnex.yang, Xuan Ding

This series supports DMA device to use vfio in async vhost.

The first patch extends the capability of current vfio dma mapping
API to allow partial unmapping for adjacent memory if the platform
does not support partial unmapping. The second patch involves the
IOMMU programming for guest memory in async vhost.

v4:
* Fix a format issue.

v3:
* Move the async_map_status flag to virtio_net structure to avoid
ABI breaking.

v2:
* Add rte_errno filtering for some devices bound in the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.

Xuan Ding (2):
  vfio: allow partially unmapping adjacent memory
  vhost: enable IOMMU for async vhost

 lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
 lib/vhost/vhost.h        |   4 +
 lib/vhost/vhost_user.c   | 112 ++++++++++++-
 3 files changed, 342 insertions(+), 112 deletions(-)

-- 
2.17.1



* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  @ 2021-09-26 11:25  0%               ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-09-26 11:25 UTC (permalink / raw)
  To: andrew.rybchenko, ferruh.yigit, aman.deep.singh
  Cc: NBU-Contact-Thomas Monjalon, dev, Slava Ovsiienko, jerinj

On Wed, 2021-08-11 at 12:57 +0100, Ferruh Yigit wrote:
> On 8/10/2021 10:07 AM, Xueming(Steven) Li wrote:
> > 
> > 
> > > -----Original Message-----
> > > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > > Sent: Tuesday, August 10, 2021 4:54 PM
> > > To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>
> > > Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> > > Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> > > 
> > > On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
> > > > Hi Singh and Ferruh,
> > > > 
> > > > > -----Original Message-----
> > > > > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > > > > Sent: Monday, August 9, 2021 11:31 PM
> > > > > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> > > > > <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
> > > > > <xuemingl@nvidia.com>
> > > > > Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> > > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> > > > > Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> > > > > 
> > > > > On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> > > > > > Hi Xueming,
> > > > > > 
> > > > > > On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
> > > > > > > On 7/27/21 6:41 AM, Xueming Li wrote:
> > > > > > > > To align with other eth device queue configuration callbacks,
> > > > > > > > change RX and TX queue release callback API parameter from queue
> > > > > > > > object to device and queue index.
> > > > > > > > 
> > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > 
> > > > > > > In fact, there is no strong reasons to do it, but I think it is a
> > > > > > > nice cleanup to use (dev + queue index) on control path.
> > > > > > > 
> > > > > > > Hopefully it will not result in any regressions.
> > > > > > 
> > > > > > Combined there are 100+ API's for Rx/Tx queue_release that need to
> > > > > > be modified for it.
> > > > > > 
> > > > > > I believe all regression possibilities here will be caught, in
> > > > > > compilation phase itself.
> > > > > > 
> > > > > 
> > > > > Same here, it is a good cleanup but there is no strong reason for it.
> > > > > 
> > > > > Since it is all internal, there is no ABI restriction on the patch,
> > > > > and v21.11 will be full ABI break patches, to not cause conflicts with this change, what would you think to have it on v22.02?
> > > > 
> > > > This patch is required by shared-rxq feature which ABI broken, target to 21.11.
> > > 
> > > Why it is required?
> > 
> > In rx burst function, rxq object is used in data path. For best data performance, it's shared-rxq object in case of shared rxq enabled.
> > I think eth api defined rxq object for performance as well, specific on data plane. 
> > Hardware saves port info received packet descriptor for my case.
> > Can't tell which device's queue with this shared rxq object, control path can't use this shared rxq anymore, have to be specific on dev and queue id.
> > 
> 
> I have seen shared Rx queue patch, but that just introduces the offload and
> doesn't have the PMD implementation, so hard to see the dependency, can you
> please put the pseudocode for PMDs for shared-rxq?
> How a queue will know if it is shared or not, during release?
> 
> Btw, shared Rx doesn't mention from this dependency in the patch.

Hi Ferruh, finally got the PMD code ported:
http://mails.dpdk.org/archives/dev/2021-September/221326.html
 
> 
> > > 
> > > > I'll do it carefully, fortunately, the change is straightforward.
> > > > 
> > 
> 



* [dpdk-dev] [PATCH v5 0/2] support IOMMU for DMA device
                     ` (2 preceding siblings ...)
  2021-09-25 10:33  3% ` [dpdk-dev] [PATCH v4 " Xuan Ding
@ 2021-09-27  7:48  3% ` Xuan Ding
  2021-09-29  2:41  3% ` [dpdk-dev] [PATCH v6 " Xuan Ding
  2021-10-11  7:59  3% ` [dpdk-dev] [PATCH v7 0/2] Support " Xuan Ding
  5 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-09-27  7:48 UTC (permalink / raw)
  To: dev, maxime.coquelin, chenbo.xia, anatoly.burakov
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, YvonneX.Yang, Xuan Ding

This series supports DMA device to use vfio in async vhost.

The first patch extends the capability of current vfio dma mapping
API to allow partial unmapping for adjacent memory if the platform
does not support partial unmapping. The second patch involves the
IOMMU programming for guest memory in async vhost.

v5:
* Fix issue of a pointer be freed early.

v4:
* Fix a format issue.

v3:
* Move the async_map_status flag to virtio_net structure to avoid
ABI breaking.

v2:
* Add rte_errno filtering for some devices bound in the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.

Xuan Ding (2):
  vfio: allow partially unmapping adjacent memory
  vhost: enable IOMMU for async vhost

 lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
 lib/vhost/vhost.h        |   4 +
 lib/vhost/vhost_user.c   | 114 ++++++++++++-
 3 files changed, 344 insertions(+), 112 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note
    @ 2021-09-27 15:22  3%   ` Shijith Thotton
  2021-10-03  9:48  0%     ` Gujjar, Abhinandan S
  1 sibling, 1 reply; 200+ results
From: Shijith Thotton @ 2021-09-27 15:22 UTC (permalink / raw)
  To: dev
  Cc: Shijith Thotton, abhinandan.gujjar, adwivedi, anoobj, gakhil,
	jerinj, pbhagavatula

The proposed change to event crypto metadata is not done as per the
deprecation note. Instead, comments are updated in the spec to improve
readability.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
v4:
* Removed changes as per deprecation note.
* Updated spec comments.

v3:
* Updated ABI section of release notes.

v2:
* Updated deprecation notice.

v1:
* Rebased.

 doc/guides/rel_notes/deprecation.rst    | 6 ------
 lib/eventdev/rte_event_crypto_adapter.h | 1 +
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bf1e07c0a8..fae3abd282 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -254,12 +254,6 @@ Deprecation Notices
   An 8-byte reserved field will be added to the structure ``rte_event_timer`` to
   support future extensions.
 
-* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space holder
-  for ``response_info``. Both should be decoupled for better clarity.
-  New space for ``response_info`` can be made by changing
-  ``rte_event_crypto_metadata`` type to structure from union.
-  This change is targeted for DPDK 21.11.
-
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
diff --git a/lib/eventdev/rte_event_crypto_adapter.h b/lib/eventdev/rte_event_crypto_adapter.h
index 27fb628eef..edbd5c61a3 100644
--- a/lib/eventdev/rte_event_crypto_adapter.h
+++ b/lib/eventdev/rte_event_crypto_adapter.h
@@ -227,6 +227,7 @@ union rte_event_crypto_metadata {
 	struct rte_event_crypto_request request_info;
 	/**< Request information to be filled in by application
 	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
+	 * First 8 bytes of request_info is reserved for response_info.
 	 */
 	struct rte_event response_info;
 	/**< Response information to be filled in by application
-- 
2.25.1



* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-22 15:08  0%       ` Ananyev, Konstantin
@ 2021-09-27 16:14  0%         ` Jerin Jacob
  2021-09-28  9:37  0%           ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-09-27 16:14 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard

On Wed, Sep 22, 2021 at 8:38 PM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
>
>
> > > Hi Jerin,
> > >
> > > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > > Due to significant amount of work, changes required are applied only to two
> > > > > PMDs so far: net/i40e and net/ice.
> > > > > So to build it you'll need to add:
> > > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > > to your config options.
> > > >
> > > > >
> > > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > > >
> > > > > So far I done only limited amount functional and performance testing.
> > > > > Didn't spot any functional problems, and performance numbers
> > > > > remains the same before and after the patch on my box (testpmd, macswap fwd).
> > > >
> > > >
> > > > Based on testing on octeonxt2. We see some regression in testpmd and
> > > > bit on l3fwd too.
> > > >
> > > > Without patch: 73.5mpps/core in testpmd iofwd
> > > > With patch: 72.5mpps/core in testpmd iofwd
> > > >
> > > > Based on my understanding it is due to additional indirection.
> > >
> > > From your patch below, it looks like not actually additional indirection,
> > > but extra memory dereference - func and dev pointers are now stored
> > > at different places.
> >
> > Yup. I meant the same. We are on the same page.
> >
> > > Plus the fact that now we dereference rte_eth_devices[]
> > > data inside PMD function. Which probably prevents compiler and CPU to load
> > >  rte_eth_devices[port_id].data and rte_eth_devices[port_id]. pre_tx_burst_cbs[queue_id]
> > > in advance before calling actual RX/TX function.
> >
> > Yes.
> >
> > > About your approach: I don’t mind to add extra opaque 'void *data' pointer,
> > > but would prefer not to expose callback invocations code into inline function.
> > > Main reason for that - I think it still need to be reworked to allow adding/removing
> > > callbacks without stopping the device. Something similar to what was done for cryptodev
> > > callbacks. To be able to do that in future without another ABI breakage callbacks related part
> > > needs to be kept internal.
> > > Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> > > One for rx/tx queue data pointers, second for rx/tx callback pointers.
> > > To be more specific, something like:
> > >
> > > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > > ....
> > >
> > > struct rte_eth_burst_api {
> > >         rte_eth_rx_burst_t rx_pkt_burst;
> > >         /**< PMD receive function. */
> > >         rte_eth_tx_burst_t tx_pkt_burst;
> > >         /**< PMD transmit function. */
> > >         rte_eth_tx_prep_t tx_pkt_prepare;
> > >         /**< PMD transmit prepare function. */
> > >         rte_eth_rx_queue_count_t rx_queue_count;
> > >         /**< Get the number of used RX descriptors. */
> > >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> > >         /**< Check the status of a Rx descriptor. */
> > >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> > >         /**< Check the status of a Tx descriptor. */
> > >         struct {
> > >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> > >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> > >        } rx_data, tx_data;
> > > } __rte_cache_aligned;
> > >
> > > static inline uint16_t
> > > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > > {
> > >        struct rte_eth_burst_api *p;
> > >
> > >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> > >                 return 0;
> > >
> > >       p =  &rte_eth_burst_api[port_id];
> > >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> >
> >
> >
> > That works.
> >
> >
> > > }
> > >
> > > Same for TX.
> > >
> > > If that looks ok to everyone, I'll try to prepare next version based on that.
> >
> >
> > Looks good to me.
> >
> > > In theory that should avoid the extra-dereference problem and even reduce indirection.
> > > As a drawback, data->rxq/txq would always have to be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > > but I presume that's not a big deal.
> > >
> > > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > > while rte_ethdev_trace_tx_burst() is invoked after the CBs but before the actual tx_pkt_burst()?
> > > It would make things simpler if tracing were always done either on entrance or on exit of rx/tx_burst.
> >
> > exit is fine.
> >
> > >
> > > >
> > > > My suggestion is to fix the problem by:
> > > > removing the additional `data` redirection, pulling the callback function
> > > > pointers back,
> > > > and keeping the rest as opaque as done in the existing patch like [1]
> > > >
> > > > I don't believe this has any real implication for future ABI stability,
> > > > as we will not be adding
> > > > any new item to rte_eth_fp in any way; new features can be added in the slow-path
> > > > rte_eth_dev as mentioned in the patch.
> >
> > Ack
> >
> > I will be happy to test again after the rework and report any
> > performance issues found.
>
> Thanks Jerin, v2 is out:
> https://patches.dpdk.org/project/dpdk/list/?series=19084
> Please have a look, when you'll get a chance.

Tested the series. Looks good, No performance issue.

>

^ permalink raw reply	[relevance 0%]
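[Editor's note] The flat "fast-path ops" array agreed on in the thread above can be sketched in plain C. Everything below (the names eth_fp_ops, demo_rx_burst, the MAX_* sizes, the four-argument burst signature) is illustrative scaffolding under the thread's assumptions, not the final DPDK API:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS  4
#define MAX_QUEUES 8

struct pkt { uint16_t id; };

/* Burst prototype that receives the callback pointer as an argument,
 * so the PMD never has to re-dereference the device array itself. */
typedef uint16_t (*rx_burst_t)(void *rxq, struct pkt **pkts, uint16_t n,
			       void *cbs);

/* Per-port flat structure: the burst function plus pointers to the
 * driver's queue-data and callback arrays. Keeping them together
 * lets the compiler fetch everything the inline wrapper needs
 * before the indirect call. */
struct eth_fp_ops {
	rx_burst_t rx_pkt_burst;
	void **rxq_data; /* would point at dev->data->rx_queues */
	void **rxq_clbk; /* would point at dev->post_rx_burst_cbs */
};

static struct eth_fp_ops eth_fp_ops[MAX_PORTS];

/* Inline wrapper in the spirit of rte_eth_rx_burst(): one flat-array
 * lookup, then a single call through the per-port function pointer. */
static inline uint16_t
eth_rx_burst(uint16_t port, uint16_t queue, struct pkt **pkts, uint16_t n)
{
	struct eth_fp_ops *p;

	if (port >= MAX_PORTS || queue >= MAX_QUEUES)
		return 0;
	p = &eth_fp_ops[port];
	return p->rx_pkt_burst(p->rxq_data[queue], pkts, n,
			       p->rxq_clbk[queue]);
}

/* Toy PMD burst: returns n packets, each tagged with the queue index
 * that was smuggled in as the opaque rxq pointer. */
static uint16_t
demo_rx_burst(void *rxq, struct pkt **pkts, uint16_t n, void *cbs)
{
	static struct pkt p;
	uint16_t i;

	(void)cbs;
	p.id = (uint16_t)(uintptr_t)rxq;
	for (i = 0; i < n; i++)
		pkts[i] = &p;
	return n;
}
```

As discussed above, the drawback of this layout is that the queue-data arrays must be sized for the maximum queue count per port.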

* Re: [dpdk-dev] [PATCH v6 2/3] security: add option for faster udata or mdata access
  @ 2021-09-27 17:10  0%     ` Thomas Monjalon
  2021-09-28  8:24  0%       ` [dpdk-dev] [EXT] " Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-09-27 17:10 UTC (permalink / raw)
  To: konstantin.ananyev, jerinj, gakhil, Nithin Dabilpuram
  Cc: roy.fan.zhang, hemant.agrawal, matan, dev, ferruh.yigit,
	radu.nicolau, olivier.matz, g.singh, declan.doherty, jiawenwu

15/09/2021 18:30, Nithin Dabilpuram:
> Currently rte_security_set_pkt_metadata() and rte_security_get_userdata()
> methods to set pkt metadata on Inline outbound and get userdata
> after Inline inbound processing is always driver specific callbacks.
> 
> For drivers that do not have much to do in the callbacks but just
> to update metadata in rte_security dynamic field and get userdata
> from rte_security dynamic field, having to just to PMD specific
> callback is costly per packet operation. This patch provides
> a mechanism to do the same in inline function and avoid function
> pointer jump if a driver supports the same.
> 
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Akhil Goyal <gakhil@marvell.com>
[...]
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -223,6 +223,12 @@ ABI Changes
>  
>  * No ABI change that would break compatibility with 20.11.
>  
> +* security: ``rte_security_set_pkt_metadata`` and ``rte_security_get_userdata``
> +  routines used by Inline outbound and Inline inbound security processing are
> +  made inline and enhanced to do simple 64-bit set/get for PMD's that do not
> +  have much processing in PMD specific callbacks but just 64-bit set/get.
> +  This avoids a per pkt function pointer jump overhead for such PMD's.

Please pay attention it is not the right release notes.



^ permalink raw reply	[relevance 0%]
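[Editor's note] The optimization debated in this sub-thread - skipping the per-packet function-pointer jump when a PMD only needs a plain 64-bit store into the mbuf's rte_security dynamic field - can be sketched roughly as below. All names here (sec_ctx, SEC_CTX_FLAG_FAST_MDATA, the mbuf stand-in) are hypothetical stand-ins, not the actual rte_security API:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the mbuf dynamic field used by rte_security. */
struct mbuf { uint64_t sec_dynfield; };

struct session { uint64_t fast_mdata; };

typedef int (*set_md_t)(void *dev, struct session *sess,
			struct mbuf *m, void *params);

struct sec_ctx {
	set_md_t set_pkt_metadata; /* PMD callback (may be NULL) */
	uint32_t flags;
};

/* Capability flag: the PMD declares that metadata setting is
 * nothing more than a 64-bit store into the dynamic field. */
#define SEC_CTX_FLAG_FAST_MDATA 0x1u

/* Inline helper: when the PMD advertises the fast flag, do the
 * 64-bit store directly and skip the per-packet indirect call;
 * otherwise fall back to the driver-specific callback. */
static inline int
sec_set_pkt_metadata(struct sec_ctx *ctx, struct session *s,
		     struct mbuf *m, void *params)
{
	if (ctx->flags & SEC_CTX_FLAG_FAST_MDATA) {
		m->sec_dynfield = s->fast_mdata;
		return 0;
	}
	if (ctx->set_pkt_metadata == NULL)
		return -1;
	return ctx->set_pkt_metadata(NULL, s, m, params);
}
```

The point of the pattern is that the common (fast) path is fully inlined, so drivers with trivial callbacks pay no indirect-call cost per packet.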

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-23  5:58  0%     ` Wang, Haiyue
@ 2021-09-27 18:01  0%       ` Jerin Jacob
  2021-09-28  9:42  0%         ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-09-27 18:01 UTC (permalink / raw)
  To: Wang, Haiyue
  Cc: Ananyev, Konstantin, dev, Li, Xiaoyun, anoobj, jerinj,
	ndabilpuram, adwivedi, shepard.siegel, ed.czeck, john.miller,
	irusskikh, ajit.khaparde, somnath.kotur, rahul.lakkireddy,
	hemant.agrawal, sachin.saxena, Daley, John, hyonkim, Zhang, Qi Z,
	Wang, Xiao W, humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu,
	Jingjing, Yang, Qiming, matan, viacheslavo, sthemmin, longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh,
	mdr, Jayatheerthan, Jay

On Thu, Sep 23, 2021 at 11:28 AM Wang, Haiyue <haiyue.wang@intel.com> wrote:
>
> > -----Original Message-----
> > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Sent: Wednesday, September 22, 2021 22:10
> > To: dev@dpdk.org
> > Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; anoobj@marvell.com; jerinj@marvell.com;
> > ndabilpuram@marvell.com; adwivedi@marvell.com; shepard.siegel@atomicrules.com;
> > ed.czeck@atomicrules.com; john.miller@atomicrules.com; irusskikh@marvell.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; rahul.lakkireddy@chelsio.com;
> > hemant.agrawal@nxp.com; sachin.saxena@oss.nxp.com; Wang, Haiyue <haiyue.wang@intel.com>; Daley, John
> > <johndale@cisco.com>; hyonkim@cisco.com; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang, Xiao W
> > <xiao.w.wang@intel.com>; humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com; Xing, Beilei
> > <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> > matan@nvidia.com; viacheslavo@nvidia.com; sthemmin@microsoft.com; longli@microsoft.com;
> > heinrich.kuhn@corigine.com; kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> > mczekaj@marvell.com; jiawenwu@trustnetic.com; jianwang@trustnetic.com; maxime.coquelin@redhat.com; Xia,
> > Chenbo <chenbo.xia@intel.com>; thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> > mdr@ashroe.eu; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>
> > Subject: [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
> >
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a separate flat
> > array. We can keep it public to still use inline functions for 'fast' calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The intention is to make rte_eth_dev and related structures internal.
> > That should allow possible future changes to the core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
> >  lib/ethdev/ethdev_private.h  |  7 +++++
> >  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> >  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
> >  4 files changed, 122 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..a1683da77b 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> >               RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> >       return str == NULL ? -1 : 0;
> >  }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > +             __rte_unused struct rte_mbuf **rx_pkts,
> > +             __rte_unused uint16_t nb_pkts)
> > +{
> > +     RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > +     rte_errno = ENOTSUP;
> > +     return 0;
> > +}
> > +
> > +static uint16_t
> > +dummy_eth_tx_burst(__rte_unused void *txq,
> > +             __rte_unused struct rte_mbuf **tx_pkts,
> > +             __rte_unused uint16_t nb_pkts)
> > +{
> > +     RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > +     rte_errno = ENOTSUP;
> > +     return 0;
> > +}
> > +
> > +void
> > +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> > +{
> > +     static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > +     static const struct rte_eth_burst_api dummy_api = {
> > +             .rx_pkt_burst = dummy_eth_rx_burst,
> > +             .tx_pkt_burst = dummy_eth_tx_burst,
> > +             .rxq = {.data = dummy_data, .clbk = dummy_data,},
> > +             .txq = {.data = dummy_data, .clbk = dummy_data,},
> > +     };
> > +
> > +     *rba = dummy_api;
> > +}
> > +
> > +void
> > +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > +             const struct rte_eth_dev *dev)
> > +{
> > +     rba->rx_pkt_burst = dev->rx_pkt_burst;
> > +     rba->tx_pkt_burst = dev->tx_pkt_burst;
> > +     rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> > +     rba->rx_queue_count = dev->rx_queue_count;
> > +     rba->rx_descriptor_status = dev->rx_descriptor_status;
> > +     rba->tx_descriptor_status = dev->tx_descriptor_status;
> > +
> > +     rba->rxq.data = dev->data->rx_queues;
> > +     rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > +
> > +     rba->txq.data = dev->data->tx_queues;
> > +     rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > +}
> > +
> > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > index 9bb0879538..54921f4860 100644
> > --- a/lib/ethdev/ethdev_private.h
> > +++ b/lib/ethdev/ethdev_private.h
> > @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> >  /* Parse devargs value for representor parameter. */
> >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> > +/* reset eth 'burst' API to dummy values */
> > +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> > +
> > +/* setup eth 'burst' API to ethdev values */
> > +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > +             const struct rte_eth_dev *dev);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 424bc260fa..5904bb7bae 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -44,6 +44,9 @@
> >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> > +/* public 'fast/burst' API */
> > +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > +
> >  /* spinlock for eth device callbacks */
> >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> >               (*dev->dev_ops->link_update)(dev, 0);
> >       }
> >
> > +     /* expose selection of PMD rx/tx function */
> > +     eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> > +
> >       rte_ethdev_trace_start(port_id);
> >       return 0;
> >  }
> > @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> >               return 0;
> >       }
> >
> > +     /* point rx/tx functions to dummy ones */
> > +     eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> > +
> >       dev->data->dev_started = 0;
> >       ret = (*dev->dev_ops->dev_stop)(dev);
> >       rte_ethdev_trace_stop(port_id, ret);
> > @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> >       return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> >  }
> >
> > +RTE_INIT(eth_dev_init_burst_api)
> > +{
> > +     uint32_t i;
> > +
> > +     for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> > +             eth_dev_burst_api_reset(rte_eth_burst_api + i);
> > +}
> > +
> >  RTE_INIT(eth_dev_init_cb_lists)
> >  {
> >       uint16_t i;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 00f27c643a..da6de5de43 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointers to internal ethdev RX/TX
> > + * queue data.
> > + * The main purpose of exposing these pointers at all is to allow the
> > + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +     void **data;
> > +     /**< points to array of internal queue data pointers */
> > +     void **clbk;
> > +     /**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * 'fast' ethdev functions and related data are held in a flat array,
> > + * one entry per ethdev.
> > + */
> > +struct rte_eth_burst_api {
>
> 'ops' is better ? Like "struct rte_eth_burst_ops". ;-)

Since not all fastpath APIs are burst in nature, IMO rte_eth_fp_ops
or similar may be better.

>
> > +
> > +     /** first 64B line */
> > +     eth_rx_burst_t rx_pkt_burst;
> > +     /**< PMD receive function. */
> > +     eth_tx_burst_t tx_pkt_burst;
> > +     /**< PMD transmit function. */
> > +     eth_tx_prep_t tx_pkt_prepare;
> > +     /**< PMD transmit prepare function. */
> > +     eth_rx_queue_count_t rx_queue_count;
> > +     /**< Get the number of used RX descriptors. */
> > +     eth_rx_descriptor_status_t rx_descriptor_status;
> > +     /**< Check the status of a Rx descriptor. */
> > +     eth_tx_descriptor_status_t tx_descriptor_status;
> > +     /**< Check the status of a Tx descriptor. */
> > +     uintptr_t reserved[2];
> > +
>
> How about 32 bit system ? Does it need something like :
>
> __rte_cache_aligned for rxq to make sure 64B line?

__rte_cache_aligned_min for 128B CL systems.

>
> > +     /** second 64B line */
> > +     struct rte_ethdev_qdata rxq;
> > +     struct rte_ethdev_qdata txq;
> > +     uintptr_t reserved2[4];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > +
> >
> >  /**
> >   * @internal
> > --
> > 2.26.3
>

^ permalink raw reply	[relevance 0%]
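[Editor's note] The dummy-burst lifecycle shown in the patch above (reset the per-port ops to dummy functions on stop, point them at the driver's real functions on start) can be sketched as below. The structures are simplified stand-ins for rte_eth_burst_api / rte_eth_dev, not the real ones:

```c
#include <stddef.h>
#include <stdint.h>

struct pkt;
typedef uint16_t (*rx_burst_t)(void *rxq, struct pkt **pkts, uint16_t n);

/* Simplified per-port fast-path entry. */
struct fp_ops {
	rx_burst_t rx_pkt_burst;
	void **rxq_data;
};

/* Burst installed while a port is stopped or unconfigured: it just
 * reports "no packets" (the real DPDK version also sets rte_errno
 * to ENOTSUP and logs an error). */
static uint16_t
dummy_rx_burst(void *rxq, struct pkt **pkts, uint16_t n)
{
	(void)rxq; (void)pkts; (void)n;
	return 0;
}

/* Shared dummy queue-data array, so even a mistaken call on a
 * stopped port dereferences valid memory. */
static void *dummy_qdata[8];

static void
fp_ops_reset(struct fp_ops *ops)
{
	ops->rx_pkt_burst = dummy_rx_burst;
	ops->rxq_data = dummy_qdata;
}

/* Driver-provided state, normally taken from struct rte_eth_dev. */
struct dev {
	rx_burst_t rx_pkt_burst;
	void **rx_queues;
};

/* Called on dev_start: expose the PMD's real function and queues. */
static void
fp_ops_setup(struct fp_ops *ops, const struct dev *d)
{
	ops->rx_pkt_burst = d->rx_pkt_burst;
	ops->rxq_data = d->rx_queues;
}

/* Toy PMD burst used only to show the switch-over. */
static uint16_t
real_rx_burst(void *rxq, struct pkt **pkts, uint16_t n)
{
	(void)rxq; (void)pkts;
	return n; /* pretend the ring always has n packets */
}
```

The design choice worth noting: because the dummy entries are valid function and data pointers rather than NULL, the hot path needs no extra "is this port started?" branch.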

* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
  @ 2021-09-28  5:50  0%                         ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-09-28  5:50 UTC (permalink / raw)
  To: jerinjacobk
  Cc: NBU-Contact-Thomas Monjalon, andrew.rybchenko, dev, ferruh.yigit

On Thu, 2021-09-16 at 09:46 +0530, Jerin Jacob wrote:
> On Wed, Sep 15, 2021 at 8:15 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > 
> > Hi Jerin,
> > 
> > On Mon, 2021-08-30 at 15:01 +0530, Jerin Jacob wrote:
> > > On Sat, Aug 28, 2021 at 7:46 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > 
> > > > 
> > > > 
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Thursday, August 26, 2021 7:58 PM
> > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > > > > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > 
> > > > > On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Thursday, August 19, 2021 1:27 PM
> > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > > > 
> > > > > > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > > > > > 
> > > > > > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > > > > > queue
> > > > > > > > > > > 
> > > > > > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > In the current DPDK framework, each RX queue is pre-loaded
> > > > > > > > > > > > with mbufs for incoming packets. When the number of
> > > > > > > > > > > > representors scales out in a switch domain, the memory
> > > > > > > > > > > > consumption becomes significant. Most importantly, polling
> > > > > > > > > > > > all ports leads to high cache miss rates, high latency and low throughput.
> > > > > > > > > > > > 
> > > > > > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > > > > > > 
> > > > > > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > > > > > Queue index is
> > > > > > > > > > > > 1:1 mapped in shared group.
> > > > > > > > > > > > 
> > > > > > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > > > > > > 
> > > > > > > > > > > > Multiple groups is supported by group ID.
> > > > > > > > > > > > 
> > > > > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > > > ---
> > > > > > > > > > > > Rx queue object could be used as shared Rx queue object,
> > > > > > > > > > > > it's important to clear all queue control callback api that using queue object:
> > > > > > > > > > > > 
> > > > > > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > > > > > > > 
> > > > > > > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > > > > > */
> > > > > > > > > > > > +       uint32_t shared_group; /**< Shared port group
> > > > > > > > > > > > + index in switch domain. */
> > > > > > > > > > > 
> > > > > > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > > > > > How this group is created?
> > > > > > > > > > 
> > > > > > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > > > > > > 
> > > > > > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > > > > > to support group other than default.
> > > > > > > > > > 
> > > > > > > > > > To support more groups simultaneously, need to consider
> > > > > > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > > > > > It's possible to specify how many ports to increase group
> > > > > > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > > > > > > 
> > > > > > > > > > On the other hand, one group should be sufficient for most
> > > > > > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > > > > > > 
> > > > > > > > > Ack. One group is enough in testpmd.
> > > > > > > > > 
> > > > > > > > > My question was more about who and how this group is created,
> > > > > > > > > Should n't we need API to create shared_group? If we do the following, at least, I can think, how it can be implemented in SW
> > > > > or other HW.
> > > > > > > > > 
> > > > > > > > > - Create aggregation queue group
> > > > > > > > > - Attach multiple  Rx queues to the aggregation queue group
> > > > > > > > > - Pull the packets from the queue group(which internally fetch
> > > > > > > > > from the Rx queues _attached_)
> > > > > > > > > 
> > > > > > > > > Does the above kind of sequence, break your representor use case?
> > > > > > > > 
> > > > > > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > > > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > > > > > > 
> > > > > > > Which rte_flow pattern/action for this?
> > > > > > 
> > > > > > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > > > > > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> > > > > 
> > > > > See below.
> > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > > > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > > > > > >   currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > > > > >   be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > > > > > >   An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > > > > > >   the shared rxq group - this could be an helper API.
> > > > > > > > 
> > > > > > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > > > > > > 
> > > > > > > Are you doing this feature based on any HW support or it just pure
> > > > > > > SW thing, If it is SW, It is better to have just new vdev for like drivers/net/bonding/. This we can help aggregate multiple Rxq across
> > > > > the multiple ports of same the driver.
> > > > > > 
> > > > > > Based on HW support.
> > > > > 
> > > > > In Marvell HW, we have some support; I will outline it here, along with some queries on this.
> > > > > 
> > > > > # We need to create some new HW structure for aggregation # Connect each Rxq to the new HW structure for aggregation # Use
> > > > > rx_burst from the new HW structure.
> > > > > 
> > > > > Could you outline your HW support?
> > > > > 
> > > > > Also, I am not able to understand how this will reduce memory; at least in our HW it requires more memory now, as
> > > > > we need to deal with a new HW structure.
> > > > >
> > > > > How does it reduce memory in your HW? Also, if memory is the constraint, why NOT reduce the number of queues?
> > > > > 
> > > > 
> > > > Glad to know that Marvell is working on this; what's the status of the driver implementation?
> > > > 
> > > > In my PMD implementation it's very similar: a new HW object, a shared memory pool, is created to replace the per-rxq memory pool.
> > > > A legacy rxq feeds its queue with as many allocated mbufs as there are descriptors; now shared rxqs share the same pool, so there
> > > > is no need to supply mbufs for each rxq - just feed the shared rxq.
> > > >
> > > > So the memory saving is the per-rxq mbufs: even with 1000 representors in a shared rxq group, the mbufs consumed equal one rxq's worth.
> > > > In other words, new members of a shared rxq don't allocate new mbufs to feed the rxq; they share with the existing shared rxq (HW mempool).
> > > > The memory required to set up each rxq doesn't change too much, agreed.
> > > 
> > > We can ask the application to configure the same mempool for multiple
> > > RQ too. RIght? If the saving is based on sharing the mempool
> > > with multiple RQs.
> > > 
> > > > 
> > > > > # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> > > > > 
> > > > > # Driver Initializes one more eth_dev_ops in driver as aggregator ethdev # devargs of new ethdev or specific API like
> > > > > drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port # No
> > > > > change in fastpath or ABI is required in this model.
> > > > > 
> > > > 
> > > > This could be an option to access shared rxq. What's the difference of the new PMD?
> > > 
> > > No ABI and fast change are required.
> > > 
> > > > What's the difference of PMD driver to create the new device?
> > > > 
> > > > Is it important in your implementation? Does it work with existing rx_burst api?
> > > 
> > > Yes . It will work with the existing rx_burst API.
> > > 
> > 
> > The aggregator ethdev required by the user is a port; maybe it is good to add
> > a callback for the PMD to prepare a complete ethdev, just like creating a
> > representor ethdev - the PMD registers the new port internally. If the PMD
> > doesn't provide the callback, the ethdev API falls back to initializing an
> > empty ethdev by copying the (shared) rxq data and rx_burst API from the source
> > port and share group. Actually, users can do this fallback themselves or with
> > a util API.
> > 
> > IIUC, an aggregator ethdev is not a must; do you think we can continue and
> > leave that design to a later stage?
> 
> 
> IMO the aggregator ethdev reduces the complexity for the application, hence
> avoiding any change in
> the test application etc. I prefer to take that. I will leave the
> decision to the ethdev maintainers.

Hi Jerin, new API added for aggregator, the last one in v3, thanks! 

> 
> 
> > 
> > > > 
> > > > > 
> > > > > 
> > > > > > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > > > > > but some user might prefer grouping some hot
> > > > > > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> > > > > Anyway, welcome any suggestion.
> > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > >         /**
> > > > > > > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > > > > >          * Only offloads set on rx_queue_offload_capa or
> > > > > > > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf
> > > > > > > > > > > > { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH                0x00080000
> > > > > > > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > > > > +/**
> > > > > > > > > > > > + * Rx queue is shared among ports in same switch domain
> > > > > > > > > > > > +to save memory,
> > > > > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > > > > + */
> > > > > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > > > > > > > 
> > > > > > > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > > > > >                                  DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > > > > > > \
> > > > > > > > > > > > --
> > > > > > > > > > > > 2.25.1
> > > > > > > > > > > > 
> > 


^ permalink raw reply	[relevance 0%]
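[Editor's note] The shared-Rx-queue idea from this thread - many member ports of a switch domain feeding one queue/mempool, with the true source port recorded in mbuf->port - can be sketched as below. The mbuf and queue structures are toy stand-ins under the thread's assumptions, not the actual PMD implementation:

```c
#include <stdint.h>

#define BURST 4

/* Stand-in mbuf: only the source-port field matters here. */
struct mbuf { uint16_t port; };

/* A shared Rx queue: one descriptor ring / mempool serving every
 * member port of the group. The HW (emulated here) has already
 * tagged each packet with the port it actually arrived on. */
struct shared_rxq {
	struct mbuf ring[BURST];
	uint16_t n_ready;
};

/* Polling any member port drains the same shared ring; the caller
 * demultiplexes afterwards by looking at mbuf->port. */
static uint16_t
shared_rxq_burst(struct shared_rxq *q, struct mbuf **pkts, uint16_t n)
{
	uint16_t i, cnt = q->n_ready < n ? q->n_ready : n;

	for (i = 0; i < cnt; i++)
		pkts[i] = &q->ring[i];
	q->n_ready = (uint16_t)(q->n_ready - cnt);
	return cnt;
}
```

This illustrates the memory argument made above: mbufs are provisioned once for the shared ring, not once per member port, and a single poll covers all representors in the group.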

* Re: [dpdk-dev] [EXT] Re: [PATCH v6 2/3] security: add option for faster udata or mdata access
  2021-09-27 17:10  0%     ` Thomas Monjalon
@ 2021-09-28  8:24  0%       ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-28  8:24 UTC (permalink / raw)
  To: Thomas Monjalon, konstantin.ananyev, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram
  Cc: roy.fan.zhang, hemant.agrawal, matan, dev, ferruh.yigit,
	radu.nicolau, olivier.matz, g.singh, declan.doherty, jiawenwu

> > --- a/doc/guides/rel_notes/release_21_08.rst
> > +++ b/doc/guides/rel_notes/release_21_08.rst
> > @@ -223,6 +223,12 @@ ABI Changes
> >
> >  * No ABI change that would break compatibility with 20.11.
> >
> > +* security: ``rte_security_set_pkt_metadata`` and
> ``rte_security_get_userdata``
> > +  routines used by Inline outbound and Inline inbound security processing
> are
> > +  made inline and enhanced to do simple 64-bit set/get for PMD's that do
> not
> > +  have much processing in PMD specific callbacks but just 64-bit set/get.
> > +  This avoids a per pkt function pointer jump overhead for such PMD's.
> 
> Please pay attention it is not the right release notes.
> 
Fixed... My bad.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-09-27 16:14  0%         ` Jerin Jacob
@ 2021-09-28  9:37  0%           ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-09-28  9:37 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Yang,
	Qiming, Zhang, Qi Z, Xing, Beilei, techboard


> On Wed, Sep 22, 2021 at 8:38 PM Ananyev, Konstantin
> <konstantin.ananyev@intel.com> wrote:
> >
> >
> > > > Hi Jerin,
> > > >
> > > > > > NOTE: This is just an RFC to start further discussion and collect the feedback.
> > > > > > Due to significant amount of work, changes required are applied only to two
> > > > > > PMDs so far: net/i40e and net/ice.
> > > > > > So to build it you'll need to add:
> > > > > > -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> > > > > > to your config options.
> > > > >
> > > > > >
> > > > > > That approach was selected to avoid(/minimize) possible performance losses.
> > > > > >
> > > > > > So far I done only limited amount functional and performance testing.
> > > > > > Didn't spot any functional problems, and performance numbers
> > > > > > remains the same before and after the patch on my box (testpmd, macswap fwd).
> > > > >
> > > > >
> > > > > Based on testing on octeonxt2. We see some regression in testpmd and
> > > > > bit on l3fwd too.
> > > > >
> > > > > Without patch: 73.5mpps/core in testpmd iofwd
> > > > > With patch: 72.5mpps/core in testpmd iofwd
> > > > >
> > > > > Based on my understanding it is due to additional indirection.
> > > >
> > > > From your patch below, it looks like it is not actually an additional
> > > > indirection, but an extra memory dereference - func and dev pointers
> > > > are now stored at different places.
> > >
> > > Yup. I meant the same. We are on the same page.
> > >
> > > > Plus the fact that now we dereference rte_eth_devices[]
> > > > data inside PMD function. Which probably prevents compiler and CPU to load
> > > >  rte_eth_devices[port_id].data and rte_eth_devices[port_id]. pre_tx_burst_cbs[queue_id]
> > > > in advance before calling actual RX/TX function.
> > >
> > > Yes.
> > >
> > > > About your approach: I don’t mind to add extra opaque 'void *data' pointer,
> > > > but would prefer not to expose callback invocations code into inline function.
> > > > Main reason for that - I think it still need to be reworked to allow adding/removing
> > > > callbacks without stopping the device. Something similar to what was done for cryptodev
> > > > callbacks. To be able to do that in future without another ABI breakage callbacks related part
> > > > needs to be kept internal.
> > > > Though what we probably can do: add two dynamic arrays of opaque pointers to  rte_eth_burst_api.
> > > > One for rx/tx queue data pointers, second for rx/tx callback pointers.
> > > > To be more specific, something like:
> > > >
> > > > typedef uint16_t (*rte_eth_rx_burst_t)( void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, void *cbs);
> > > > typedef uint16_t (*rte_eth_tx_burst_t)(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *cbs);
> > > > ....
> > > >
> > > > struct rte_eth_burst_api {
> > > >         rte_eth_rx_burst_t rx_pkt_burst;
> > > >         /**< PMD receive function. */
> > > >         rte_eth_tx_burst_t tx_pkt_burst;
> > > >         /**< PMD transmit function. */
> > > >         rte_eth_tx_prep_t tx_pkt_prepare;
> > > >         /**< PMD transmit prepare function. */
> > > >         rte_eth_rx_queue_count_t rx_queue_count;
> > > >         /**< Get the number of used RX descriptors. */
> > > >         rte_eth_rx_descriptor_status_t rx_descriptor_status;
> > > >         /**< Check the status of a Rx descriptor. */
> > > >         rte_eth_tx_descriptor_status_t tx_descriptor_status;
> > > >         /**< Check the status of a Tx descriptor. */
> > > >         struct {
> > > >                  void **queue_data;   /* point to rte_eth_devices[port_id].data-> rx_queues */
> > > >                  void **cbs;                  /*  points to rte_eth_devices[port_id].post_rx_burst_cbs */
> > > >        } rx_data, tx_data;
> > > > } __rte_cache_aligned;
> > > >
> > > > static inline uint16_t
> > > > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > > >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > > > {
> > > >        struct rte_eth_burst_api *p;
> > > >
> > > >         if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT)
> > > >                 return 0;
> > > >
> > > >       p =  &rte_eth_burst_api[port_id];
> > > >       return p->rx_pkt_burst(p->rx_data.queue_data[queue_id], rx_pkts, nb_pkts, p->rx_data.cbs[queue_id]);
> > >
> > >
> > >
> > > That works.
> > >
> > >
> > > > }
> > > >
> > > > Same for TX.
> > > >
> > > > If that looks ok to everyone, I'll try to prepare next version based on that.
> > >
> > >
> > > Looks good to me.
> > >
> > > > In theory that should avoid extra dereference problem and even reduce indirection.
> > > > As a drawback data->rxq/txq should always be allocated for RTE_MAX_QUEUES_PER_PORT entries,
> > > > but I presume that’s not a big deal.
> > > >
> > > > As a side question - is there any reason why rte_ethdev_trace_rx_burst() is invoked at the very last point,
> > > > while rte_ethdev_trace_tx_burst() is invoked after CBs but before the actual tx_pkt_burst()?
> > > > It would make things simpler if tracing were always done either on entry to or exit from rx/tx_burst.
> > >
> > > exit is fine.
> > >
> > > >
> > > > >
> > > > > My suggestion is to fix the problem by:
> > > > > removing the additional `data` redirection, pulling the callback
> > > > > function pointers back,
> > > > > and keeping the rest as opaque as done in the existing patch like [1]
> > > > >
> > > > > I don't believe this has any real implication on future ABI stability
> > > > > as we will not be adding
> > > > > any new item in rte_eth_fp in any way as new features can be added in slowpath
> > > > > rte_eth_dev as mentioned in the patch.
> > >
> > > Ack
> > >
> > > I will be happy to test again after the rework and report any performance
> > > issues if any.
> >
> > Thanks Jerin, v2 is out:
> > https://patches.dpdk.org/project/dpdk/list/?series=19084
> > Please have a look, when you'll get a chance.
> 
> Tested the series. Looks good, No performance issue.

That's great news, thanks for testing it.
Plan to proceed with proper v3 in next few days.
 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC v2 3/5] ethdev: copy ethdev 'burst' API into separate structure
  2021-09-27 18:01  0%       ` Jerin Jacob
@ 2021-09-28  9:42  0%         ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-09-28  9:42 UTC (permalink / raw)
  To: Jerin Jacob, Wang, Haiyue
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	Xia, Chenbo, thomas, Yigit, Ferruh, mdr, Jayatheerthan, Jay


> > >
> > > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > > pointers to internal data from rte_eth_dev structure into a separate flat
> > > array. We can keep it public to still use inline functions for 'fast' calls
> > > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > > The intention is to make rte_eth_dev and related structures internal.
> > > That should allow future possible changes to core eth_dev structures
> > > to be transparent to the user and help to avoid ABI/API breakages.
> > >
> > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > >  lib/ethdev/ethdev_private.c  | 53 ++++++++++++++++++++++++++++++++++++
> > >  lib/ethdev/ethdev_private.h  |  7 +++++
> > >  lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> > >  lib/ethdev/rte_ethdev_core.h | 45 ++++++++++++++++++++++++++++++
> > >  4 files changed, 122 insertions(+)
> > >
> > > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > > index 012cf73ca2..a1683da77b 100644
> > > --- a/lib/ethdev/ethdev_private.c
> > > +++ b/lib/ethdev/ethdev_private.c
> > > @@ -174,3 +174,56 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> > >               RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> > >       return str == NULL ? -1 : 0;
> > >  }
> > > +
> > > +static uint16_t
> > > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > > +             __rte_unused struct rte_mbuf **rx_pkts,
> > > +             __rte_unused uint16_t nb_pkts)
> > > +{
> > > +     RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > > +     rte_errno = ENOTSUP;
> > > +     return 0;
> > > +}
> > > +
> > > +static uint16_t
> > > +dummy_eth_tx_burst(__rte_unused void *txq,
> > > +             __rte_unused struct rte_mbuf **tx_pkts,
> > > +             __rte_unused uint16_t nb_pkts)
> > > +{
> > > +     RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > > +     rte_errno = ENOTSUP;
> > > +     return 0;
> > > +}
> > > +
> > > +void
> > > +eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
> > > +{
> > > +     static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > > +     static const struct rte_eth_burst_api dummy_api = {
> > > +             .rx_pkt_burst = dummy_eth_rx_burst,
> > > +             .tx_pkt_burst = dummy_eth_tx_burst,
> > > +             .rxq = {.data = dummy_data, .clbk = dummy_data,},
> > > +             .txq = {.data = dummy_data, .clbk = dummy_data,},
> > > +     };
> > > +
> > > +     *rba = dummy_api;
> > > +}
> > > +
> > > +void
> > > +eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > > +             const struct rte_eth_dev *dev)
> > > +{
> > > +     rba->rx_pkt_burst = dev->rx_pkt_burst;
> > > +     rba->tx_pkt_burst = dev->tx_pkt_burst;
> > > +     rba->tx_pkt_prepare = dev->tx_pkt_prepare;
> > > +     rba->rx_queue_count = dev->rx_queue_count;
> > > +     rba->rx_descriptor_status = dev->rx_descriptor_status;
> > > +     rba->tx_descriptor_status = dev->tx_descriptor_status;
> > > +
> > > +     rba->rxq.data = dev->data->rx_queues;
> > > +     rba->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > > +
> > > +     rba->txq.data = dev->data->tx_queues;
> > > +     rba->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > > +}
> > > +
> > > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > > index 9bb0879538..54921f4860 100644
> > > --- a/lib/ethdev/ethdev_private.h
> > > +++ b/lib/ethdev/ethdev_private.h
> > > @@ -30,6 +30,13 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> > >  /* Parse devargs value for representor parameter. */
> > >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> > >
> > > +/* reset eth 'burst' API to dummy values */
> > > +void eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
> > > +
> > > +/* setup eth 'burst' API to ethdev values */
> > > +void eth_dev_burst_api_setup(struct rte_eth_burst_api *rba,
> > > +             const struct rte_eth_dev *dev);
> > > +
> > >  #ifdef __cplusplus
> > >  }
> > >  #endif
> > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > > index 424bc260fa..5904bb7bae 100644
> > > --- a/lib/ethdev/rte_ethdev.c
> > > +++ b/lib/ethdev/rte_ethdev.c
> > > @@ -44,6 +44,9 @@
> > >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> > >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> > >
> > > +/* public 'fast/burst' API */
> > > +struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > > +
> > >  /* spinlock for eth device callbacks */
> > >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> > >
> > > @@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> > >               (*dev->dev_ops->link_update)(dev, 0);
> > >       }
> > >
> > > +     /* expose selection of PMD rx/tx function */
> > > +     eth_dev_burst_api_setup(rte_eth_burst_api + port_id, dev);
> > > +
> > >       rte_ethdev_trace_start(port_id);
> > >       return 0;
> > >  }
> > > @@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> > >               return 0;
> > >       }
> > >
> > > +     /* point rx/tx functions to dummy ones */
> > > +     eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
> > > +
> > >       dev->data->dev_started = 0;
> > >       ret = (*dev->dev_ops->dev_stop)(dev);
> > >       rte_ethdev_trace_stop(port_id, ret);
> > > @@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> > >       return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> > >  }
> > >
> > > +RTE_INIT(eth_dev_init_burst_api)
> > > +{
> > > +     uint32_t i;
> > > +
> > > +     for (i = 0; i != RTE_DIM(rte_eth_burst_api); i++)
> > > +             eth_dev_burst_api_reset(rte_eth_burst_api + i);
> > > +}
> > > +
> > >  RTE_INIT(eth_dev_init_cb_lists)
> > >  {
> > >       uint16_t i;
> > > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > > index 00f27c643a..da6de5de43 100644
> > > --- a/lib/ethdev/rte_ethdev_core.h
> > > +++ b/lib/ethdev/rte_ethdev_core.h
> > > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> > >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> > >  /**< @internal Check the status of a Tx descriptor */
> > >
> > > +/**
> > > + * @internal
> > > + * Structure used to hold opaque pointers to internal ethdev RX/TX
> > > + * queue data.
> > > + * The main purpose of exposing these pointers at all is to allow the
> > > + * compiler to fetch this data for 'fast' ethdev inline functions in advance.
> > > + */
> > > +struct rte_ethdev_qdata {
> > > +     void **data;
> > > +     /**< points to array of internal queue data pointers */
> > > +     void **clbk;
> > > +     /**< points to array of queue callback data pointers */
> > > +};
> > > +
> > > +/**
> > > + * @internal
> > > + * 'fast' ethdev functions and related data are held in a flat array,
> > > + * one entry per ethdev.
> > > + */
> > > +struct rte_eth_burst_api {
> >
> > 'ops' is better ? Like "struct rte_eth_burst_ops". ;-)
> 
> Since not all fastpath APIs are burst in nature, IMO rte_eth_fp_ops
> or so may be better.

I don't have any strong opinion about the name.
Whatever majority will decide is ok by me here.

> 
> >
> > > +
> > > +     /** first 64B line */
> > > +     eth_rx_burst_t rx_pkt_burst;
> > > +     /**< PMD receive function. */
> > > +     eth_tx_burst_t tx_pkt_burst;
> > > +     /**< PMD transmit function. */
> > > +     eth_tx_prep_t tx_pkt_prepare;
> > > +     /**< PMD transmit prepare function. */
> > > +     eth_rx_queue_count_t rx_queue_count;
> > > +     /**< Get the number of used RX descriptors. */
> > > +     eth_rx_descriptor_status_t rx_descriptor_status;
> > > +     /**< Check the status of a Rx descriptor. */
> > > +     eth_tx_descriptor_status_t tx_descriptor_status;
> > > +     /**< Check the status of a Tx descriptor. */
> > > +     uintptr_t reserved[2];
> > > +
> >
> > How about 32 bit system ? Does it need something like :

I don't think we need to do anything special for the 32-bit version here.
We have 16 pointers here in total right now,
on 32-bit systems it would fit into 64B anyway.

> >
> > __rte_cache_aligned for rxq to make sure 64B line?
> 
> __rte_cache_aligned_min for 128B CL systems.


> 
> >
> > > +     /** second 64B line */
> > > +     struct rte_ethdev_qdata rxq;
> > > +     struct rte_ethdev_qdata txq;
> > > +     uintptr_t reserved2[4];
> > > +
> > > +} __rte_cache_aligned;
> > > +
> > > +extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
> > > +
> > >
> > >  /**
> > >   * @internal
> > > --
> > > 2.26.3
> >

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 1/6] security: add SA lifetime configuration
  @ 2021-09-28 10:07  8%   ` Anoob Joseph
    1 sibling, 0 replies; 200+ results
From: Anoob Joseph @ 2021-09-28 10:07 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
  Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
	Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev

Add SA lifetime configuration to register soft and hard expiry limits.
Expiry can be in units of number of packets or bytes. The crypto op is
also updated to include a new field, aux_flags, which can be used to
indicate cases such as soft expiry in case of lookaside protocol
operations.

In case of soft expiry, the packets are successfully IPsec processed but
the soft expiry would indicate that SA needs to be reconfigured. For
inline protocol capable ethdev, this would result in an eth event while
for lookaside protocol capable cryptodev, this can be communicated via
`rte_crypto_op.aux_flags` field.

In case of hard expiry, the packets will not be IPsec processed and
would result in error.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

---
 .../test_cryptodev_security_ipsec_test_vectors.h   |  3 ---
 doc/guides/rel_notes/deprecation.rst               |  5 ----
 doc/guides/rel_notes/release_21_11.rst             | 13 ++++++++++
 examples/ipsec-secgw/ipsec.c                       |  2 +-
 examples/ipsec-secgw/ipsec.h                       |  2 +-
 lib/cryptodev/rte_crypto.h                         | 18 +++++++++++++-
 lib/security/rte_security.h                        | 28 ++++++++++++++++++++--
 7 files changed, 58 insertions(+), 13 deletions(-)

diff --git a/app/test/test_cryptodev_security_ipsec_test_vectors.h b/app/test/test_cryptodev_security_ipsec_test_vectors.h
index ae9cd24..38ea43d 100644
--- a/app/test/test_cryptodev_security_ipsec_test_vectors.h
+++ b/app/test/test_cryptodev_security_ipsec_test_vectors.h
@@ -98,7 +98,6 @@ struct ipsec_test_data pkt_aes_128_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
@@ -195,7 +194,6 @@ struct ipsec_test_data pkt_aes_192_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
@@ -295,7 +293,6 @@ struct ipsec_test_data pkt_aes_256_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 70ef45e..69fbde0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,8 +275,3 @@ Deprecation Notices
 * cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
   content. On Linux and FreeBSD, supported prior to DPDK 20.11,
   original structure will be kept until DPDK 21.11.
-
-* cryptodev: The structure ``rte_crypto_op`` would be updated to reduce
-  reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
-  information from the crypto/security operation. This field will be used to
-  communicate events such as soft expiry with IPsec in lookaside mode.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index eef7f79..0b7ffa5 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -147,6 +147,13 @@ API Changes
   as it is for drivers only and should be private to DPDK, and not
   installed for app use.
 
+* cryptodev: use 1 reserved byte from ``rte_crypto_op`` for aux flags
+
+  * Updated the structure ``rte_crypto_op`` to reduce reserved bytes to
+  2 (from 3), and use 1 byte to indicate warnings and other information from
+  the crypto/security operation. This field will be used to communicate events
+  such as soft expiry with IPsec in lookaside mode.
+
 
 ABI Changes
 -----------
@@ -168,6 +175,12 @@ ABI Changes
   * Added IPsec SA option to disable IV generation to allow known vector
     tests as well as usage of application provided IV on supported PMDs.
 
+* security: add IPsec SA lifetime configuration
+
+  * Added IPsec SA lifetime configuration to allow applications to configure
+    soft and hard SA expiry limits. Limits can be either in units of packets or
+    bytes.
+
 
 Known Issues
 ------------
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 5b032fe..4868294 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -49,7 +49,7 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec)
 		}
 		/* TODO support for Transport */
 	}
-	ipsec->esn_soft_limit = IPSEC_OFFLOAD_ESN_SOFTLIMIT;
+	ipsec->life.packets_soft_limit = IPSEC_OFFLOAD_PKTS_SOFTLIMIT;
 	ipsec->replay_win_sz = app_sa_prm.window_size;
 	ipsec->options.esn = app_sa_prm.enable_esn;
 	ipsec->options.udp_encap = sa->udp_encap;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index ae5058d..90c81c1 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -23,7 +23,7 @@
 
 #define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
 
-#define IPSEC_OFFLOAD_ESN_SOFTLIMIT 0xffffff00
+#define IPSEC_OFFLOAD_PKTS_SOFTLIMIT 0xffffff00
 
 #define IV_OFFSET		(sizeof(struct rte_crypto_op) + \
 				sizeof(struct rte_crypto_sym_op))
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index fd5ef3a..d602183 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -66,6 +66,17 @@ enum rte_crypto_op_sess_type {
 };
 
 /**
+ * Auxiliary flags to indicate additional info from the operation
+ */
+
+/**
+ * Auxiliary flags related to IPsec offload with RTE_SECURITY
+ */
+
+#define RTE_CRYPTO_OP_AUX_FLAGS_IPSEC_SOFT_EXPIRY (1 << 0)
+/**< SA soft expiry limit has been reached */
+
+/**
  * Cryptographic Operation.
  *
  * This structure contains data relating to performing cryptographic
@@ -93,7 +104,12 @@ struct rte_crypto_op {
 			 */
 			uint8_t sess_type;
 			/**< operation session type */
-			uint8_t reserved[3];
+			uint8_t aux_flags;
+			/**< Operation specific auxiliary/additional flags.
+			 * These flags carry additional information from the
+			 * operation. Processing of the same is optional.
+			 */
+			uint8_t reserved[2];
 			/**< Reserved bytes to fill 64 bits for
 			 * future additions
 			 */
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index f9e6591..88147e1 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -217,6 +217,30 @@ enum rte_security_ipsec_sa_direction {
 };
 
 /**
+ * Configure soft and hard lifetime of an IPsec SA
+ *
+ * Lifetime of an IPsec SA would specify the maximum number of packets or bytes
+ * that can be processed. IPsec operations would start failing once any hard
+ * limit is reached.
+ *
+ * Soft limits can be specified to generate notification when the SA is
+ * approaching hard limits for lifetime. For inline operations, reaching soft
+ * expiry limit would result in raising an eth event for the same. For lookaside
+ * operations, this would result in a warning returned in
+ * ``rte_crypto_op.aux_flags``.
+ */
+struct rte_security_ipsec_lifetime {
+	uint64_t packets_soft_limit;
+	/**< Soft expiry limit in number of packets */
+	uint64_t bytes_soft_limit;
+	/**< Soft expiry limit in bytes */
+	uint64_t packets_hard_limit;
+	/**< Hard expiry limit in number of packets */
+	uint64_t bytes_hard_limit;
+	/**< Hard expiry limit in bytes */
+};
+
+/**
  * IPsec security association configuration data.
  *
  * This structure contains data required to create an IPsec SA security session.
@@ -236,8 +260,8 @@ struct rte_security_ipsec_xform {
 	/**< IPsec SA Mode - transport/tunnel */
 	struct rte_security_ipsec_tunnel_param tunnel;
 	/**< Tunnel parameters, NULL for transport mode */
-	uint64_t esn_soft_limit;
-	/**< ESN for which the overflow event need to be raised */
+	struct rte_security_ipsec_lifetime life;
+	/**< IPsec SA lifetime */
 	uint32_t replay_win_sz;
 	/**< Anti replay window size to enable sequence replay attack handling.
 	 * replay checking is disabled if the window size is 0.
-- 
2.7.4


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v4 1/6] security: add SA lifetime configuration
  @ 2021-09-28 10:59  8%     ` Anoob Joseph
  0 siblings, 0 replies; 200+ results
From: Anoob Joseph @ 2021-09-28 10:59 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
  Cc: Anoob Joseph, Jerin Jacob, Archana Muniganti, Tejasree Kondoj,
	Hemant Agrawal, Radu Nicolau, Ciara Power, Gagandeep Singh, dev

Add SA lifetime configuration to register soft and hard expiry limits.
Expiry can be in units of number of packets or bytes. The crypto op is
also updated to include a new field, aux_flags, which can be used to
indicate cases such as soft expiry in case of lookaside protocol
operations.

In case of soft expiry, the packets are successfully IPsec processed but
the soft expiry would indicate that SA needs to be reconfigured. For
inline protocol capable ethdev, this would result in an eth event while
for lookaside protocol capable cryptodev, this can be communicated via
`rte_crypto_op.aux_flags` field.

In case of hard expiry, the packets will not be IPsec processed and
would result in error.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

---
 .../test_cryptodev_security_ipsec_test_vectors.h   |  3 ---
 doc/guides/rel_notes/deprecation.rst               |  5 ----
 doc/guides/rel_notes/release_21_11.rst             | 13 ++++++++++
 examples/ipsec-secgw/ipsec.c                       |  2 +-
 examples/ipsec-secgw/ipsec.h                       |  2 +-
 lib/cryptodev/rte_crypto.h                         | 12 +++++++++-
 lib/security/rte_security.h                        | 28 ++++++++++++++++++++--
 7 files changed, 52 insertions(+), 13 deletions(-)

diff --git a/app/test/test_cryptodev_security_ipsec_test_vectors.h b/app/test/test_cryptodev_security_ipsec_test_vectors.h
index ae9cd24..38ea43d 100644
--- a/app/test/test_cryptodev_security_ipsec_test_vectors.h
+++ b/app/test/test_cryptodev_security_ipsec_test_vectors.h
@@ -98,7 +98,6 @@ struct ipsec_test_data pkt_aes_128_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
@@ -195,7 +194,6 @@ struct ipsec_test_data pkt_aes_192_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
@@ -295,7 +293,6 @@ struct ipsec_test_data pkt_aes_256_gcm = {
 		.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
 		.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
 		.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
-		.esn_soft_limit = 0,
 		.replay_win_sz = 0,
 	},
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 70ef45e..69fbde0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,8 +275,3 @@ Deprecation Notices
 * cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
   content. On Linux and FreeBSD, supported prior to DPDK 20.11,
   original structure will be kept until DPDK 21.11.
-
-* cryptodev: The structure ``rte_crypto_op`` would be updated to reduce
-  reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
-  information from the crypto/security operation. This field will be used to
-  communicate events such as soft expiry with IPsec in lookaside mode.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c93cc20..114631e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -152,6 +152,13 @@ API Changes
   as it is for drivers only and should be private to DPDK, and not
   installed for app use.
 
+* cryptodev: use 1 reserved byte from ``rte_crypto_op`` for aux flags
+
+  * Updated the structure ``rte_crypto_op`` to reduce reserved bytes to
+  2 (from 3), and use 1 byte to indicate warnings and other information from
+  the crypto/security operation. This field will be used to communicate events
+  such as soft expiry with IPsec in lookaside mode.
+
 
 ABI Changes
 -----------
@@ -174,6 +181,12 @@ ABI Changes
   have much processing in PMD specific callbacks but just 64-bit set/get.
   This avoids a per pkt function pointer jump overhead for such PMD's.
 
+* security: add IPsec SA lifetime configuration
+
+  * Added IPsec SA lifetime configuration to allow applications to configure
+    soft and hard SA expiry limits. Limits can be either in units of packets or
+    bytes.
+
 
 Known Issues
 ------------
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 5b032fe..4868294 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -49,7 +49,7 @@ set_ipsec_conf(struct ipsec_sa *sa, struct rte_security_ipsec_xform *ipsec)
 		}
 		/* TODO support for Transport */
 	}
-	ipsec->esn_soft_limit = IPSEC_OFFLOAD_ESN_SOFTLIMIT;
+	ipsec->life.packets_soft_limit = IPSEC_OFFLOAD_PKTS_SOFTLIMIT;
 	ipsec->replay_win_sz = app_sa_prm.window_size;
 	ipsec->options.esn = app_sa_prm.enable_esn;
 	ipsec->options.udp_encap = sa->udp_encap;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index ae5058d..90c81c1 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -23,7 +23,7 @@
 
 #define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
 
-#define IPSEC_OFFLOAD_ESN_SOFTLIMIT 0xffffff00
+#define IPSEC_OFFLOAD_PKTS_SOFTLIMIT 0xffffff00
 
 #define IV_OFFSET		(sizeof(struct rte_crypto_op) + \
 				sizeof(struct rte_crypto_sym_op))
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index fd5ef3a..a864f50 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -65,6 +65,11 @@ enum rte_crypto_op_sess_type {
 	RTE_CRYPTO_OP_SECURITY_SESSION	/**< Security session crypto operation */
 };
 
+/* Auxiliary flags related to IPsec offload with RTE_SECURITY */
+
+#define RTE_CRYPTO_OP_AUX_FLAGS_IPSEC_SOFT_EXPIRY (1 << 0)
+/**< SA soft expiry limit has been reached */
+
 /**
  * Cryptographic Operation.
  *
@@ -93,7 +98,12 @@ struct rte_crypto_op {
 			 */
 			uint8_t sess_type;
 			/**< operation session type */
-			uint8_t reserved[3];
+			uint8_t aux_flags;
+			/**< Operation specific auxiliary/additional flags.
+			 * These flags carry additional information from the
+			 * operation. Processing of the same is optional.
+			 */
+			uint8_t reserved[2];
 			/**< Reserved bytes to fill 64 bits for
 			 * future additions
 			 */
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index f9e6591..88147e1 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -217,6 +217,30 @@ enum rte_security_ipsec_sa_direction {
 };
 
 /**
+ * Configure soft and hard lifetime of an IPsec SA
+ *
+ * Lifetime of an IPsec SA would specify the maximum number of packets or bytes
+ * that can be processed. IPsec operations would start failing once any hard
+ * limit is reached.
+ *
+ * Soft limits can be specified to generate notification when the SA is
+ * approaching hard limits for lifetime. For inline operations, reaching soft
+ * expiry limit would result in raising an eth event for the same. For lookaside
+ * operations, this would result in a warning returned in
+ * ``rte_crypto_op.aux_flags``.
+ */
+struct rte_security_ipsec_lifetime {
+	uint64_t packets_soft_limit;
+	/**< Soft expiry limit in number of packets */
+	uint64_t bytes_soft_limit;
+	/**< Soft expiry limit in bytes */
+	uint64_t packets_hard_limit;
+	/**< Hard expiry limit in number of packets */
+	uint64_t bytes_hard_limit;
+	/**< Hard expiry limit in bytes */
+};
+
+/**
  * IPsec security association configuration data.
  *
  * This structure contains data required to create an IPsec SA security session.
@@ -236,8 +260,8 @@ struct rte_security_ipsec_xform {
 	/**< IPsec SA Mode - transport/tunnel */
 	struct rte_security_ipsec_tunnel_param tunnel;
 	/**< Tunnel parameters, NULL for transport mode */
-	uint64_t esn_soft_limit;
-	/**< ESN for which the overflow event need to be raised */
+	struct rte_security_ipsec_lifetime life;
+	/**< IPsec SA lifetime */
 	uint32_t replay_win_sz;
 	/**< Anti replay window size to enable sequence replay attack handling.
 	 * replay checking is disabled if the window size is 0.
-- 
2.7.4


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v2 1/3] security: add option to configure tunnel header verification
  @ 2021-09-28 12:07  5% ` Tejasree Kondoj
  0 siblings, 0 replies; 200+ results
From: Tejasree Kondoj @ 2021-09-28 12:07 UTC (permalink / raw)
  To: Akhil Goyal, Radu Nicolau, Declan Doherty
  Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi, Jerin Jacob,
	Konstantin Ananyev, Ciara Power, Hemant Agrawal, Gagandeep Singh,
	Fan Zhang, Archana Muniganti, dev

Add option to indicate whether outer header verification
needs to be done as part of inbound IPsec processing.

With inline IPsec processing, SA lookup happens in the Rx
path of rte_ethdev. When rte_flow is configured to support
more than one SA, the SPI is used to look up the SA. In such
cases, additional verification is required to ensure duplicate
SPIs are not processed in the inline path.

For lookaside cases, the same option can be used by application
to offload tunnel verification to the PMD.

These verifications would help in averting possible DoS attacks.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst   |  2 +-
 doc/guides/rel_notes/release_21_11.rst |  5 +++++
 lib/security/rte_security.h            | 17 +++++++++++++++++
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 69fbde0c70..80ae9a6372 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -238,7 +238,7 @@ Deprecation Notices
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
   will be updated with new fields to support new features like IPsec inner
-  checksum, tunnel header verification, TSO in case of protocol offload.
+  checksum, TSO in case of protocol offload.
 
 * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
   ``hdr_l3_len`` to configure tunnel L3 header length.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0b7ffa5e50..623b52d9c9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -181,6 +181,11 @@ ABI Changes
     soft and hard SA expiry limits. Limits can be either in units of packets or
     bytes.
 
+* security: add IPsec SA option to configure tunnel header verification
+
+  * Added SA option to indicate whether outer header verification needs to be
+    done as part of inbound IPsec processing.
+
 
 Known Issues
 ------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88147e1f57..a10c9b5f00 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -55,6 +55,14 @@ enum rte_security_ipsec_tunnel_type {
 	/**< Outer header is IPv6 */
 };
 
+/**
+ * IPSEC tunnel header verification mode
+ *
+ * Controls how outer IP header is verified in inbound.
+ */
+#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR     0x1
+#define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR 0x2
+
 /**
  * Security context for crypto/eth devices
  *
@@ -206,6 +214,15 @@ struct rte_security_ipsec_sa_options {
 	 * by the PMD.
 	 */
 	uint32_t iv_gen_disable : 1;
+
+	/** Verify tunnel header in inbound
+	 * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR``: Verify destination
+	 *   IP address.
+	 *
+	 * * ``RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR``: Verify both
+	 *   source and destination IP addresses.
+	 */
+	uint32_t tunnel_hdr_verify : 2;
 };
 
 /** IPSec security association direction */
-- 
2.27.0


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH 1/3] security: add SA config option for inner pkt csum
  @ 2021-09-28 13:26  5% ` Archana Muniganti
  0 siblings, 0 replies; 200+ results
From: Archana Muniganti @ 2021-09-28 13:26 UTC (permalink / raw)
  To: gakhil, radu.nicolau, roy.fan.zhang, hemant.agrawal, konstantin.ananyev
  Cc: Archana Muniganti, anoobj, ktejasree, adwivedi, jerinj, dev

Add inner packet IPv4 header and L4 checksum enable options
to the SA configuration. These will be used in case of protocol
offload. Per SA, the application can specify whether checksum
computation/verification can be offloaded to the security device.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst   |  4 ++--
 doc/guides/rel_notes/release_21_11.rst |  5 +++++
 lib/cryptodev/rte_cryptodev.h          |  2 ++
 lib/security/rte_security.h            | 18 ++++++++++++++++++
 4 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 80ae9a6372..ae2d6ffe33 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -237,8 +237,8 @@ Deprecation Notices
   IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
-  will be updated with new fields to support new features like IPsec inner
-  checksum, TSO in case of protocol offload.
+  will be updated with new fields to support new features like TSO in case of
+  protocol offload.
 
 * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
   ``hdr_l3_len`` to configure tunnel L3 header length.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index e84a8863e9..42ed9ee580 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -197,6 +197,11 @@ ABI Changes
   * Added SA option to indicate whether UDP ports verification need to be
     done as part of inbound IPsec processing.
 
+* security: add IPsec SA config option for inner packet checksum
+
+  * Added inner packet IPv4 hdr and L4 checksum enable options in conf.
+    Per SA, application could specify whether the checksum (compute/verify)
+    can be offloaded to security device.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index bb01f0f195..d9271a6c45 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support operations on multiple data-units message */
 #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
 /**< Support wrapped key in cipher xform  */
+#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
+/**< Support inner checksum computation/verification */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index ae5a2e09c3..47d0b5689c 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
 	 *   source and destination IP addresses.
 	 */
 	uint32_t tunnel_hdr_verify : 2;
+
+	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet IPv4 header checksum
+	 *      before tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet IP header checksum is not computed/verified.
+	 */
+	uint32_t ip_csum_enable : 1;
+
+	/** Compute/verify inner packet L4 checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet L4 checksum before
+	 *      tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet L4 checksum is not computed/verified.
+	 */
+	uint32_t l4_csum_enable : 1;
 };
 
 /** IPSec security association direction */
-- 
2.22.0


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH] ethdev: remove deprecated shared counter attribute
  @ 2021-09-28 15:58  4% ` Stephen Hemminger
  2021-09-28 16:25  0%   ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-09-28 15:58 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Ori Kam, Xiaoyun Li, Ray Kinsella, Ajit Khaparde, Somnath Kotur,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Min Hu (Connor),
	Yisen Zhuang, Lijun Ou, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Jerin Jacob, Jasvinder Singh,
	Cristian Dumitrescu, Thomas Monjalon, Ferruh Yigit, dev

On Tue, 28 Sep 2021 18:23:00 +0300
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:

> @@ -2498,24 +2498,11 @@ struct rte_flow_query_age {
>   * Counters can be retrieved and reset through ``rte_flow_query()``, see
>   * ``struct rte_flow_query_count``.
>   *
> - * @deprecated Shared attribute is deprecated, use generic
> - * RTE_FLOW_ACTION_TYPE_INDIRECT action.
> - *
> - * The shared flag indicates whether the counter is unique to the flow rule the
> - * action is specified with, or whether it is a shared counter.
> - *
> - * For a count action with the shared flag set, then then a global device
> - * namespace is assumed for the counter id, so that any matched flow rules using
> - * a count action with the same counter id on the same port will contribute to
> - * that counter.
> - *
>   * For ports within the same switch domain then the counter id namespace extends
>   * to all ports within that switch domain.
>   */
>  struct rte_flow_action_count {
> -	/** @deprecated Share counter ID with other flow rules. */
> -	uint32_t shared:1;
> -	uint32_t reserved:31; /**< Reserved, must be zero. */
> +	uint32_t reserved; /**< Reserved, must be zero. */
>  	uint32_t id; /**< Counter ID. */

Reserved fields are often source of future problems.
You should change each driver to check that reserved field
return -ENOTSUP if non-zero.  That way if reserved field is ever
used in future it won't break API/ABI.

The other option is to just remove the reserved field and take the API/ABI
hit now.

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 1/3] security: add option to configure UDP ports verification
  @ 2021-09-28 16:11  0%   ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-28 16:11 UTC (permalink / raw)
  To: Tejasree Kondoj, Radu Nicolau, Declan Doherty
  Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi,
	Jerin Jacob Kollanukkaran, Konstantin Ananyev, Ciara Power,
	Hemant Agrawal, Gagandeep Singh, Fan Zhang, Archana Muniganti,
	dev

> Add option to indicate whether UDP encapsulation ports
> verification need to be done as part of inbound
> IPsec processing.
> 
> Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst | 5 +++++
>  lib/security/rte_security.h            | 7 +++++++
>  2 files changed, 12 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index b0606cb542..afeba0105b 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -141,6 +141,11 @@ ABI Changes
>    * Added SA option to indicate whether outer header verification need to be
>      done as part of inbound IPsec processing.
> 
> +* security: add IPsec SA option to configure UDP ports verification
> +
> +  * Added SA option to indicate whether UDP ports verification need to be
> +    done as part of inbound IPsec processing.
> +
Reword as 
+* security: A new option ``udp_ports_verify`` is added in structure
+  ``rte_security_ipsec_sa_options`` to indicate whether UDP ports
+  verification need to be done as part of inbound IPsec processing.
+

> 
>  Known Issues
>  ------------
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 2a61cad885..18b0f02c44 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -139,6 +139,13 @@ struct rte_security_ipsec_sa_options {
>  	 */
>  	uint32_t udp_encap : 1;
> 
> +	/** Verify UDP encapsulation ports in inbound
> +	 *
> +	 * * 1: Match UDP source and destination ports
> +	 * * 0: Do not match UDP ports
> +	 */
> +	uint32_t udp_ports_verify : 1;
> +
>  	/** Copy DSCP bits
>  	 *
>  	 * * 1: Copy IPv4 or IPv6 DSCP bits from inner IP header to

All new options should be added at the end of this structure for backward compatibility.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: remove deprecated shared counter attribute
  2021-09-28 15:58  4% ` Stephen Hemminger
@ 2021-09-28 16:25  0%   ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-09-28 16:25 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Ori Kam, Xiaoyun Li, Ray Kinsella, Ajit Khaparde, Somnath Kotur,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Min Hu (Connor),
	Yisen Zhuang, Lijun Ou, Qiming Yang, Qi Zhang, Matan Azrad,
	Viacheslav Ovsiienko, Jerin Jacob, Jasvinder Singh,
	Cristian Dumitrescu, Thomas Monjalon, Ferruh Yigit, dev

On 9/28/21 6:58 PM, Stephen Hemminger wrote:
> On Tue, 28 Sep 2021 18:23:00 +0300
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> 
>> @@ -2498,24 +2498,11 @@ struct rte_flow_query_age {
>>   * Counters can be retrieved and reset through ``rte_flow_query()``, see
>>   * ``struct rte_flow_query_count``.
>>   *
>> - * @deprecated Shared attribute is deprecated, use generic
>> - * RTE_FLOW_ACTION_TYPE_INDIRECT action.
>> - *
>> - * The shared flag indicates whether the counter is unique to the flow rule the
>> - * action is specified with, or whether it is a shared counter.
>> - *
>> - * For a count action with the shared flag set, then then a global device
>> - * namespace is assumed for the counter id, so that any matched flow rules using
>> - * a count action with the same counter id on the same port will contribute to
>> - * that counter.
>> - *
>>   * For ports within the same switch domain then the counter id namespace extends
>>   * to all ports within that switch domain.
>>   */
>>  struct rte_flow_action_count {
>> -	/** @deprecated Share counter ID with other flow rules. */
>> -	uint32_t shared:1;
>> -	uint32_t reserved:31; /**< Reserved, must be zero. */
>> +	uint32_t reserved; /**< Reserved, must be zero. */
>>  	uint32_t id; /**< Counter ID. */
> 
> Reserved fields are often source of future problems.
> You should change each driver to check that reserved field
> return -ENOTSUP if non-zero.  That way if reserved field is ever
> used in future it won't break API/ABI.
> 
> The other option is to just remove the reserved field and take the API/ABI
> hit now.
> 

I agree. If there is no objections, I'll remove it in v2.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: promote rte_mcfg_get_single_file_segment to stable ABI
  @ 2021-09-28 19:39  4% ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-09-28 19:39 UTC (permalink / raw)
  To: Jakub Grajciar -X (jgrajcia - PANTHEON TECH SRO at Cisco); +Cc: dev

Hello Jakub,

On Mon, Sep 13, 2021 at 10:44 AM Jakub Grajciar -X (jgrajcia -
PANTHEON TECH SRO at Cisco) <jgrajcia@cisco.com> wrote:
>
> Signed-off-by: Jakub Grajciar jgrajcia@cisco.com<mailto:jgrajcia@cisco.com>

Anatoly already sent a patch for this symbol:
https://patchwork.dpdk.org/project/dpdk/patch/37d174136f8f8dcaa47f875e196bf2d9ec7d5228.1631277001.git.anatoly.burakov@intel.com/

I marked your patch as rejected and I'll merge Anatoly series.
Thanks.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 0/2] cmdline: reduce ABI
    2021-09-20 11:11  4%   ` David Marchand
@ 2021-09-28 19:47  4%   ` Dmitry Kozlyuk
  2021-09-28 19:47  4%     ` [dpdk-dev] [PATCH v2 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
  2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  1 sibling, 2 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-09-28 19:47 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Olivier Matz, Dmitry Kozlyuk

Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.

Dmitry Kozlyuk (2):
  cmdline: make struct cmdline opaque
  cmdline: make struct rdline opaque

 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 19 +++---
 doc/guides/rel_notes/deprecation.rst   |  4 --
 doc/guides/rel_notes/release_21_11.rst |  2 +
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline.h                  | 19 ------
 lib/cmdline/cmdline_cirbuf.c           |  1 -
 lib/cmdline/cmdline_private.h          | 57 +++++++++++++++++-
 lib/cmdline/cmdline_rdline.c           | 50 +++++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 83 +++++++++-----------------
 lib/cmdline/version.map                |  8 ++-
 11 files changed, 155 insertions(+), 93 deletions(-)

-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 1/2] cmdline: make struct cmdline opaque
  2021-09-28 19:47  4%   ` [dpdk-dev] [PATCH v2 0/2] " Dmitry Kozlyuk
@ 2021-09-28 19:47  4%     ` Dmitry Kozlyuk
  2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-09-28 19:47 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Olivier Matz, Dmitry Kozlyuk

Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   |  4 ----
 doc/guides/rel_notes/release_21_11.rst |  2 ++
 lib/cmdline/cmdline.h                  | 19 -------------------
 lib/cmdline/cmdline_private.h          |  8 +++++++-
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
-  content. On Linux and FreeBSD, supported prior to DPDK 20.11,
-  original structure will be kept until DPDK 21.11.
-
 * security: The functions ``rte_security_set_pkt_metadata`` and
   ``rte_security_get_userdata`` will be made inline functions and additional
   flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
 #ifndef _CMDLINE_H_
 #define _CMDLINE_H_
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
 #include <rte_common.h>
 #include <rte_compat.h>
 
@@ -27,23 +23,8 @@
 extern "C" {
 #endif
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
-	int s_in;
-	int s_out;
-	cmdline_parse_ctx_t *ctx;
-	struct rdline rdl;
-	char prompt[RDLINE_PROMPT_SIZE];
-	struct termios oldterm;
-};
-
-#else
-
 struct cmdline;
 
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
 struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
 void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
 void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
 #include <rte_os_shim.h>
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <rte_windows.h>
+#else
+#include <termios.h>
 #endif
 
 #include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
 	int is_console_input;
 	int is_console_output;
 };
+#endif
 
 struct cmdline {
 	int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
 	cmdline_parse_ctx_t *ctx;
 	struct rdline rdl;
 	char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
 	struct terminal oldterm;
 	char repeated_char;
 	WORD repeat_count;
-};
+#else
+	struct termios oldterm;
 #endif
+};
 
 /* Disable buffering and echoing, save previous settings to oldterm. */
 void terminal_adjust(struct cmdline *cl);
-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 1/3] security: add option to configure UDP ports verification
  @ 2021-09-29  3:25  5% ` Tejasree Kondoj
  0 siblings, 0 replies; 200+ results
From: Tejasree Kondoj @ 2021-09-29  3:25 UTC (permalink / raw)
  To: Akhil Goyal, Radu Nicolau, Declan Doherty
  Cc: Tejasree Kondoj, Anoob Joseph, Ankur Dwivedi, Jerin Jacob,
	Konstantin Ananyev, Ciara Power, Hemant Agrawal, Gagandeep Singh,
	Fan Zhang, Archana Muniganti, dev

Add option to indicate whether UDP encapsulation ports
verification need to be done as part of inbound
IPsec processing.

Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
 doc/guides/rel_notes/release_21_11.rst | 4 ++++
 lib/security/rte_security.h            | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f85dc99c8b..8da851cccc 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -185,6 +185,10 @@ ABI Changes
   ``rte_security_ipsec_sa_options`` to indicate whether outer header
   verification need to be done as part of inbound IPsec processing.
 
+* security: A new option ``udp_ports_verify`` was added in structure
+  ``rte_security_ipsec_sa_options`` to indicate whether UDP ports
+  verification needs to be done as part of inbound IPsec processing.
+
 * security: A new structure ``rte_security_ipsec_lifetime`` was added to
   replace ``esn_soft_limit`` in IPsec configuration structure
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index a10c9b5f00..ab1a6e1f65 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -223,6 +223,13 @@ struct rte_security_ipsec_sa_options {
 	 *   source and destination IP addresses.
 	 */
 	uint32_t tunnel_hdr_verify : 2;
+
+	/** Verify UDP encapsulation ports in inbound
+	 *
+	 * * 1: Match UDP source and destination ports
+	 * * 0: Do not match UDP ports
+	 */
+	uint32_t udp_ports_verify : 1;
 };
 
 /** IPSec security association direction */
-- 
2.27.0


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v6 0/2] support IOMMU for DMA device
                     ` (3 preceding siblings ...)
  2021-09-27  7:48  3% ` [dpdk-dev] [PATCH v5 " Xuan Ding
@ 2021-09-29  2:41  3% ` Xuan Ding
  2021-10-11  7:59  3% ` [dpdk-dev] [PATCH v7 0/2] Support " Xuan Ding
  5 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-09-29  2:41 UTC (permalink / raw)
  To: dev, anatoly.burakov, maxime.coquelin, chenbo.xia
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, yvonnex.yang, Xuan Ding

This series enables DMA devices to use VFIO in async vhost.

The first patch extends the capability of current vfio dma mapping
API to allow partial unmapping for adjacent memory if the platform
does not support partial unmapping. The second patch involves the
IOMMU programming for guest memory in async vhost.

v6:
* Fix a potential memory leak.

v5:
* Fix issue of a pointer be freed early.

v4:
* Fix a format issue.

v3:
* Move the async_map_status flag to the virtio_net structure to
avoid breaking the ABI.

v2:
* Add rte_errno filtering for some devices bound to the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.

Xuan Ding (2):
  vfio: allow partially unmapping adjacent memory
  vhost: enable IOMMU for async vhost

 lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
 lib/vhost/vhost.h        |   4 +
 lib/vhost/vhost_user.c   | 116 +++++++++++++-
 3 files changed, 346 insertions(+), 112 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable
  @ 2021-09-29  5:48  3%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-09-29  5:48 UTC (permalink / raw)
  To: Kinsella, Ray, Anatoly Burakov; +Cc: dev, Thomas Monjalon, Yigit, Ferruh

On Fri, Sep 10, 2021 at 5:51 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> On 10/09/2021 13:30, Anatoly Burakov wrote:
> > As per ABI policy, move the formerly experimental API's to the stable
> > section.
> >
> > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Series applied.

For the record, I sorted the version.map file (issue on patch 2) with:
$ git rebase -i -x './devtools/update-abi.sh 22.0 ABI_VERSION
lib/eal/' origin/main


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  @ 2021-09-29  9:08  5% ` Archana Muniganti
  2021-09-29 10:56  0%   ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Archana Muniganti @ 2021-09-29  9:08 UTC (permalink / raw)
  To: gakhil, radu.nicolau, roy.fan.zhang, hemant.agrawal, konstantin.ananyev
  Cc: Archana Muniganti, anoobj, ktejasree, adwivedi, jerinj, dev

Add inner packet IPv4 header and L4 checksum enable options
to the SA configuration. These will be used in case of protocol
offload. Per SA, the application can specify whether checksum
computation/verification can be offloaded to the security device.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |  1 +
 doc/guides/rel_notes/deprecation.rst       |  4 ++--
 doc/guides/rel_notes/release_21_11.rst     |  4 ++++
 lib/cryptodev/rte_cryptodev.h              |  2 ++
 lib/security/rte_security.h                | 18 ++++++++++++++++++
 5 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c24814de98..96d95ddc81 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -33,6 +33,7 @@ Non-Byte aligned data  =
 Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
+Inner checksum         =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..8308e00ed4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -232,8 +232,8 @@ Deprecation Notices
   IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
-  will be updated with new fields to support new features like IPsec inner
-  checksum, TSO in case of protocol offload.
+  will be updated with new fields to support new features like TSO in case of
+  protocol offload.
 
 * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
   ``hdr_l3_len`` to configure tunnel L3 header length.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8da851cccc..93d1b36889 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -194,6 +194,10 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
+  in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
+  packet IPv4 header checksum and L4 checksum need to be offloaded to
+  security device.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index bb01f0f195..d9271a6c45 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support operations on multiple data-units message */
 #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
 /**< Support wrapped key in cipher xform  */
+#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
+/**< Support inner checksum computation/verification */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index ab1a6e1f65..945f45ad76 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Do not match UDP ports
 	 */
 	uint32_t udp_ports_verify : 1;
+
+	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet IPv4 header checksum
+	 *      before tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet IP header checksum is not computed/verified.
+	 */
+	uint32_t ip_csum_enable : 1;
+
+	/** Compute/verify inner packet L4 checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet L4 checksum before
+	 *      tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet L4 checksum is not computed/verified.
+	 */
+	uint32_t l4_csum_enable : 1;
 };
 
 /** IPSec security association direction */
-- 
2.22.0


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  2021-09-29  9:08  5% ` [dpdk-dev] [PATCH v2 1/3] security: " Archana Muniganti
@ 2021-09-29 10:56  0%   ` Ananyev, Konstantin
  2021-09-29 11:03  0%     ` Anoob Joseph
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-09-29 10:56 UTC (permalink / raw)
  To: Archana Muniganti, gakhil, Nicolau, Radu, Zhang, Roy Fan, hemant.agrawal
  Cc: anoobj, ktejasree, adwivedi, jerinj, dev

> Add inner packet IPv4 hdr and L4 checksum enable options
> in conf. These will be used in case of protocol offload.
> Per SA, application could specify whether the
> checksum(compute/verify) can be offloaded to security device.
> 
> Signed-off-by: Archana Muniganti <marchana@marvell.com>
> ---
>  doc/guides/cryptodevs/features/default.ini |  1 +
>  doc/guides/rel_notes/deprecation.rst       |  4 ++--
>  doc/guides/rel_notes/release_21_11.rst     |  4 ++++
>  lib/cryptodev/rte_cryptodev.h              |  2 ++
>  lib/security/rte_security.h                | 18 ++++++++++++++++++
>  5 files changed, 27 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
> index c24814de98..96d95ddc81 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -33,6 +33,7 @@ Non-Byte aligned data  =
>  Sym raw data path API  =
>  Cipher multiple data units =
>  Cipher wrapped key     =
> +Inner checksum         =
> 
>  ;
>  ; Supported crypto algorithms of a default crypto driver.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 05fc2fdee7..8308e00ed4 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -232,8 +232,8 @@ Deprecation Notices
>    IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
> 
>  * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> -  will be updated with new fields to support new features like IPsec inner
> -  checksum, TSO in case of protocol offload.
> +  will be updated with new fields to support new features like TSO in case of
> +  protocol offload.
> 
>  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
>    ``hdr_l3_len`` to configure tunnel L3 header length.
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 8da851cccc..93d1b36889 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -194,6 +194,10 @@ ABI Changes
>    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
>    and hard expiry limits. Limits can be either in number of packets or bytes.
> 
> +* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
> +  in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
> +  packet IPv4 header checksum and L4 checksum need to be offloaded to
> +  security device.
> 
>  Known Issues
>  ------------
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index bb01f0f195..d9271a6c45 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
>  /**< Support operations on multiple data-units message */
>  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
>  /**< Support wrapped key in cipher xform  */
> +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
> +/**< Support inner checksum computation/verification */
> 
>  /**
>   * Get the name of a crypto device feature flag
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index ab1a6e1f65..945f45ad76 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
>  	 * * 0: Do not match UDP ports
>  	 */
>  	uint32_t udp_ports_verify : 1;
> +
> +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> +	 *
> +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> +	 *      before tunnel encapsulation and for inbound, verify after
> +	 *      tunnel decapsulation.
> +	 * * 0: Inner packet IP header checksum is not computed/verified.
> +	 */
> +	uint32_t ip_csum_enable : 1;
> +
> +	/** Compute/verify inner packet L4 checksum in tunnel mode
> +	 *
> +	 * * 1: For outbound, compute inner packet L4 checksum before
> +	 *      tunnel encapsulation and for inbound, verify after
> +	 *      tunnel decapsulation.
> +	 * * 0: Inner packet L4 checksum is not computed/verified.
> +	 */
> +	uint32_t l4_csum_enable : 1;

As I understand it, these two new flags serve two purposes:
1. report the HW/PMD ability to perform these offloads.
2. allow the user to enable/disable this offload on a per-SA basis.

One question I have: how will this work on the data path?
Would the decision to perform these offloads be based on the mbuf->ol_flags
value (same as we do for ethdev TX offloads)?
Or is some other approach implied?

>  };
> 
>  /** IPSec security association direction */
> --
> 2.22.0


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  2021-09-29 10:56  0%   ` Ananyev, Konstantin
@ 2021-09-29 11:03  0%     ` Anoob Joseph
  2021-09-29 11:39  0%       ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-09-29 11:03 UTC (permalink / raw)
  To: Ananyev, Konstantin, Archana Muniganti, Akhil Goyal, Nicolau,
	Radu, Zhang, Roy Fan, hemant.agrawal
  Cc: Tejasree Kondoj, Ankur Dwivedi, Jerin Jacob Kollanukkaran, dev

Hi Konstantin,

Please see inline.

Thanks,
Anoob

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 29, 2021 4:26 PM
> To: Archana Muniganti <marchana@marvell.com>; Akhil Goyal
> <gakhil@marvell.com>; Nicolau, Radu <radu.nicolau@intel.com>; Zhang, Roy
> Fan <roy.fan.zhang@intel.com>; hemant.agrawal@nxp.com
> Cc: Anoob Joseph <anoobj@marvell.com>; Tejasree Kondoj
> <ktejasree@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org
> Subject: [EXT] RE: [PATCH v2 1/3] security: add SA config option for inner pkt
> csum
> 
> External Email
> 
> ----------------------------------------------------------------------
> > Add inner packet IPv4 hdr and L4 checksum enable options in conf.
> > These will be used in case of protocol offload.
> > Per SA, application could specify whether the
> > checksum(compute/verify) can be offloaded to security device.
> >
> > Signed-off-by: Archana Muniganti <marchana@marvell.com>
> > ---
> >  doc/guides/cryptodevs/features/default.ini |  1 +
> >  doc/guides/rel_notes/deprecation.rst       |  4 ++--
> >  doc/guides/rel_notes/release_21_11.rst     |  4 ++++
> >  lib/cryptodev/rte_cryptodev.h              |  2 ++
> >  lib/security/rte_security.h                | 18 ++++++++++++++++++
> >  5 files changed, 27 insertions(+), 2 deletions(-)
> >
> > diff --git a/doc/guides/cryptodevs/features/default.ini
> > b/doc/guides/cryptodevs/features/default.ini
> > index c24814de98..96d95ddc81 100644
> > --- a/doc/guides/cryptodevs/features/default.ini
> > +++ b/doc/guides/cryptodevs/features/default.ini
> > @@ -33,6 +33,7 @@ Non-Byte aligned data  =  Sym raw data path API  =
> > Cipher multiple data units =
> >  Cipher wrapped key     =
> > +Inner checksum         =
> >
> >  ;
> >  ; Supported crypto algorithms of a default crypto driver.
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 05fc2fdee7..8308e00ed4 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -232,8 +232,8 @@ Deprecation Notices
> >    IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence
> Number).
> >
> >  * security: The IPsec SA config options ``struct
> > rte_security_ipsec_sa_options``
> > -  will be updated with new fields to support new features like IPsec
> > inner
> > -  checksum, TSO in case of protocol offload.
> > +  will be updated with new fields to support new features like TSO in
> > + case of  protocol offload.
> >
> >  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
> >    ``hdr_l3_len`` to configure tunnel L3 header length.
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index 8da851cccc..93d1b36889 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -194,6 +194,10 @@ ABI Changes
> >    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> >    and hard expiry limits. Limits can be either in number of packets or bytes.
> >
> > +* security: The new options ``ip_csum_enable`` and ``l4_csum_enable``
> > +were added
> > +  in structure ``rte_security_ipsec_sa_options`` to indicate whether
> > +inner
> > +  packet IPv4 header checksum and L4 checksum need to be offloaded to
> > +  security device.
> >
> >  Known Issues
> >  ------------
> > diff --git a/lib/cryptodev/rte_cryptodev.h
> > b/lib/cryptodev/rte_cryptodev.h index bb01f0f195..d9271a6c45 100644
> > --- a/lib/cryptodev/rte_cryptodev.h
> > +++ b/lib/cryptodev/rte_cryptodev.h
> > @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> > rte_crypto_asym_xform_type *xform_enum,  /**< Support operations on
> multiple data-units message */
> >  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
> >  /**< Support wrapped key in cipher xform  */
> > +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL
> << 27)
> > +/**< Support inner checksum computation/verification */
> >
> >  /**
> >   * Get the name of a crypto device feature flag diff --git
> > a/lib/security/rte_security.h b/lib/security/rte_security.h index
> > ab1a6e1f65..945f45ad76 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
> >  	 * * 0: Do not match UDP ports
> >  	 */
> >  	uint32_t udp_ports_verify : 1;
> > +
> > +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> > +	 *
> > +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> > +	 *      before tunnel encapsulation and for inbound, verify after
> > +	 *      tunnel decapsulation.
> > +	 * * 0: Inner packet IP header checksum is not computed/verified.
> > +	 */
> > +	uint32_t ip_csum_enable : 1;
> > +
> > +	/** Compute/verify inner packet L4 checksum in tunnel mode
> > +	 *
> > +	 * * 1: For outbound, compute inner packet L4 checksum before
> > +	 *      tunnel encapsulation and for inbound, verify after
> > +	 *      tunnel decapsulation.
> > +	 * * 0: Inner packet L4 checksum is not computed/verified.
> > +	 */
> > +	uint32_t l4_csum_enable : 1;
> 
> As I understand it, these two new flags serve two purposes:
> 1. report the HW/PMD ability to perform these offloads.
> 2. allow the user to enable/disable this offload on a per-SA basis.

[Anoob] Correct
 
> 
> One question I have: how will this work on the data path?
> Would the decision to perform these offloads be based on the mbuf->ol_flags
> value (same as we do for ethdev TX offloads)?
> Or is some other approach implied?

[Anoob] There will be two settings. It can be enabled per SA or per packet.
 
> 
> >  };
> >
> >  /** IPSec security association direction */
> > --
> > 2.22.0


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
  @ 2021-09-29 11:13  0%   ` Singh, Aman Deep
  2021-09-30 12:03  0%     ` Andrew Rybchenko
  2021-10-01 11:39  0%   ` Andrew Rybchenko
  1 sibling, 1 reply; 200+ results
From: Singh, Aman Deep @ 2021-09-29 11:13 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov


On 9/13/2021 4:56 PM, Andrew Rybchenko wrote:
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
>
> To this end, extend the rte_eth_dev_data structure to include the port ID
> of the backing device for representors.
>
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> Acked-by: Beilei Xing <beilei.xing@intel.com>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI
> check is disabled for the structure because of the libabigail bug [1].
> It should not be a problem anyway since 21.11 is an ABI breaking release.
>
> Potentially it is bad for out-of-tree drivers which implement
> representors but do not fill in a new parert_port_id field in
> rte_eth_dev_data structure. Get ID by name will not work.
Did we change the name of the new field from parert_port_id to backer_port_id?
>
> mlx5 changes should be reviewed by maintainers very carefully, since
> we are not sure if we patch it correctly.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>
> v5:
>      - try to improve name: backer_port_id instead of parent_port_id
>      - init new field to RTE_MAX_ETHPORTS on allocation to avoid
>        zero port usage by default
>
> v4:
>      - apply mlx5 review notes: remove fallback from generic ethdev
>        code and add fallback to mlx5 code to handle legacy usecase
>
> v3:
>      - fix mlx5 build breakage
>
> v2:
>      - fix mlx5 review notes
>      - try device port ID first before parent in order to address
>        backward compatibility issue
>
>   drivers/net/bnxt/bnxt_reps.c             |  1 +
>   drivers/net/enic/enic_vf_representor.c   |  1 +
>   drivers/net/i40e/i40e_vf_representor.c   |  1 +
>   drivers/net/ice/ice_dcf_vf_representor.c |  1 +
>   drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>   drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
>   drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
>   lib/ethdev/ethdev_driver.h               |  6 +++---
>   lib/ethdev/rte_class_eth.c               |  2 +-
>   lib/ethdev/rte_ethdev.c                  |  9 +++++----
>   lib/ethdev/rte_ethdev_core.h             |  6 ++++++
>   11 files changed, 46 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
> index bdbad53b7d..0d50c0f1da 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
>   	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>   					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>   	eth_dev->data->representor_id = rep_params->vf_id;
> +	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
>   
>   	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
>   	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
> diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..fedb09ecd6 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
>   	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>   					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>   	eth_dev->data->representor_id = vf->vf_id;
> +	eth_dev->data->backer_port_id = pf->port_id;
>   	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
>   		sizeof(struct rte_ether_addr) *
>   		ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..d65b821a01 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>   	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>   					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>   	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->backer_port_id = pf->dev_data->port_id;
>   
>   	/* Setting the number queues allocated to the VF */
>   	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
> diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..e51d0aa6b9 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
>   
>   	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>   	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> +	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
>   
>   	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
>   
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..9fa75984fb 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>   
>   	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>   	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
>   
>   	/* Set representor device ops */
>   	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> index 470b16cb9a..1cddaaba1a 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>   	if (priv->representor) {
>   		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>   		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->backer_port_id = port_id;
> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS)
> +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
>   	}
>   	priv->mp_id.port_id = eth_dev->data->port_id;
>   	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
> diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> index 26fa927039..a9c244c7dc 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>   	if (priv->representor) {
>   		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>   		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->backer_port_id = port_id;
> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS)
> +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
>   	}
>   	/*
>   	 * Store associated network device interface index. This index
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 40e474aa7e..b940e6cb38 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>    * For backward compatibility, if no representor info, direct
>    * map legacy VF (no controller and pf).
>    *
> - * @param ethdev
> - *  Handle of ethdev port.
> + * @param port_id
> + *  Port ID of the backing device.
>    * @param type
>    *  Representor type.
>    * @param controller
> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>    */
>   __rte_internal
>   int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>   			   enum rte_eth_representor_type type,
>   			   int controller, int pf, int representor_port,
>   			   uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> index 1fe5fa1f36..eda216ced5 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
>   		c = i / (np * nf);
>   		p = (i / nf) % np;
>   		f = i % nf;
> -		if (rte_eth_representor_id_get(edev,
> +		if (rte_eth_representor_id_get(edev->data->backer_port_id,
>   			eth_da.type,
>   			eth_da.nb_mh_controllers == 0 ? -1 :
>   					eth_da.mh_controllers[c],
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca9242..7c9b0d6b3b 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
>   	eth_dev = eth_dev_get(port_id);
>   	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
>   	eth_dev->data->port_id = port_id;
> +	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
>   	eth_dev->data->mtu = RTE_ETHER_MTU;
>   	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
>   
> @@ -5996,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
>   }
>   
>   int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>   			   enum rte_eth_representor_type type,
>   			   int controller, int pf, int representor_port,
>   			   uint16_t *repr_id)
> @@ -6012,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>   		return -EINVAL;
>   
>   	/* Get PMD representor range info. */
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> +	ret = rte_eth_representor_info_get(port_id, NULL);
>   	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
>   	    controller == -1 && pf == -1) {
>   		/* Direct mapping for legacy VF representor. */
> @@ -6027,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>   	if (info == NULL)
>   		return -ENOMEM;
>   	info->nb_ranges_alloc = n;
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> +	ret = rte_eth_representor_info_get(port_id, info);
>   	if (ret < 0)
>   		goto out;
>   
> @@ -6046,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>   			continue;
>   		if (info->ranges[i].id_end < info->ranges[i].id_base) {
>   			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> -				ethdev->data->port_id, info->ranges[i].id_base,
> +				port_id, info->ranges[i].id_base,
>   				info->ranges[i].id_end, i);
>   			continue;
>   
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index edf96de2dc..48b814e8a1 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,12 @@ struct rte_eth_dev_data {
>   			/**< Switch-specific identifier.
>   			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>   			 */
> +	uint16_t backer_port_id;
> +			/**< Port ID of the backing device.
> +			 *   This device will be used to query representor
> +			 *   info and calculate representor IDs.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
>   
>   	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
>   	uint64_t reserved_64s[4]; /**< Reserved for future fields */

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 1/3] security: add SA config option for inner pkt csum
  @ 2021-09-29 11:23  5% ` Archana Muniganti
  0 siblings, 0 replies; 200+ results
From: Archana Muniganti @ 2021-09-29 11:23 UTC (permalink / raw)
  To: gakhil, radu.nicolau, roy.fan.zhang, hemant.agrawal, konstantin.ananyev
  Cc: Archana Muniganti, anoobj, ktejasree, adwivedi, jerinj, dev

Add inner packet IPv4 header and L4 checksum enable options
in conf. These will be used in case of protocol offload.
Per SA, the application can specify whether the
checksum (compute/verify) can be offloaded to the security device.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |  1 +
 doc/guides/rel_notes/deprecation.rst       |  4 ++--
 doc/guides/rel_notes/release_21_11.rst     |  4 ++++
 lib/cryptodev/rte_cryptodev.h              |  2 ++
 lib/security/rte_security.h                | 18 ++++++++++++++++++
 5 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c24814de98..96d95ddc81 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -33,6 +33,7 @@ Non-Byte aligned data  =
 Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
+Inner checksum         =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..8308e00ed4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -232,8 +232,8 @@ Deprecation Notices
   IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
-  will be updated with new fields to support new features like IPsec inner
-  checksum, TSO in case of protocol offload.
+  will be updated with new fields to support new features like TSO in case of
+  protocol offload.
 
 * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
   ``hdr_l3_len`` to configure tunnel L3 header length.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8da851cccc..93d1b36889 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -194,6 +194,10 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
+  in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
+  packet IPv4 header checksum and L4 checksum need to be offloaded to
+  security device.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index bb01f0f195..d9271a6c45 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support operations on multiple data-units message */
 #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
 /**< Support wrapped key in cipher xform  */
+#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
+/**< Support inner checksum computation/verification */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index ab1a6e1f65..945f45ad76 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Do not match UDP ports
 	 */
 	uint32_t udp_ports_verify : 1;
+
+	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet IPv4 header checksum
+	 *      before tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet IP header checksum is not computed/verified.
+	 */
+	uint32_t ip_csum_enable : 1;
+
+	/** Compute/verify inner packet L4 checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet L4 checksum before
+	 *      tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet L4 checksum is not computed/verified.
+	 */
+	uint32_t l4_csum_enable : 1;
 };
 
 /** IPSec security association direction */
-- 
2.22.0


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  2021-09-29 11:03  0%     ` Anoob Joseph
@ 2021-09-29 11:39  0%       ` Ananyev, Konstantin
  2021-09-30  5:05  0%         ` Anoob Joseph
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-09-29 11:39 UTC (permalink / raw)
  To: Anoob Joseph, Archana Muniganti, Akhil Goyal, Nicolau, Radu,
	Zhang, Roy Fan, hemant.agrawal
  Cc: Tejasree Kondoj, Ankur Dwivedi, Jerin Jacob Kollanukkaran, dev

Hi Anoob,

> Hi Konstantin,
> 
> Please see inline.
> 
> Thanks,
> Anoob
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Sent: Wednesday, September 29, 2021 4:26 PM
> > To: Archana Muniganti <marchana@marvell.com>; Akhil Goyal
> > <gakhil@marvell.com>; Nicolau, Radu <radu.nicolau@intel.com>; Zhang, Roy
> > Fan <roy.fan.zhang@intel.com>; hemant.agrawal@nxp.com
> > Cc: Anoob Joseph <anoobj@marvell.com>; Tejasree Kondoj
> > <ktejasree@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob
> > Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org
> > Subject: [EXT] RE: [PATCH v2 1/3] security: add SA config option for inner pkt
> > csum
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > > Add inner packet IPv4 hdr and L4 checksum enable options in conf.
> > > These will be used in case of protocol offload.
> > > Per SA, application could specify whether the
> > > checksum(compute/verify) can be offloaded to security device.
> > >
> > > Signed-off-by: Archana Muniganti <marchana@marvell.com>
> > > ---
> > >  doc/guides/cryptodevs/features/default.ini |  1 +
> > >  doc/guides/rel_notes/deprecation.rst       |  4 ++--
> > >  doc/guides/rel_notes/release_21_11.rst     |  4 ++++
> > >  lib/cryptodev/rte_cryptodev.h              |  2 ++
> > >  lib/security/rte_security.h                | 18 ++++++++++++++++++
> > >  5 files changed, 27 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/doc/guides/cryptodevs/features/default.ini
> > > b/doc/guides/cryptodevs/features/default.ini
> > > index c24814de98..96d95ddc81 100644
> > > --- a/doc/guides/cryptodevs/features/default.ini
> > > +++ b/doc/guides/cryptodevs/features/default.ini
> > > @@ -33,6 +33,7 @@ Non-Byte aligned data  =  Sym raw data path API  =
> > > Cipher multiple data units =
> > >  Cipher wrapped key     =
> > > +Inner checksum         =
> > >
> > >  ;
> > >  ; Supported crypto algorithms of a default crypto driver.
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > index 05fc2fdee7..8308e00ed4 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -232,8 +232,8 @@ Deprecation Notices
> > >    IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence
> > Number).
> > >
> > >  * security: The IPsec SA config options ``struct
> > > rte_security_ipsec_sa_options``
> > > -  will be updated with new fields to support new features like IPsec
> > > inner
> > > -  checksum, TSO in case of protocol offload.
> > > +  will be updated with new fields to support new features like TSO in
> > > + case of  protocol offload.
> > >
> > >  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
> > >    ``hdr_l3_len`` to configure tunnel L3 header length.
> > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > b/doc/guides/rel_notes/release_21_11.rst
> > > index 8da851cccc..93d1b36889 100644
> > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > @@ -194,6 +194,10 @@ ABI Changes
> > >    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> > >    and hard expiry limits. Limits can be either in number of packets or bytes.
> > >
> > > +* security: The new options ``ip_csum_enable`` and ``l4_csum_enable``
> > > +were added
> > > +  in structure ``rte_security_ipsec_sa_options`` to indicate whether
> > > +inner
> > > +  packet IPv4 header checksum and L4 checksum need to be offloaded to
> > > +  security device.
> > >
> > >  Known Issues
> > >  ------------
> > > diff --git a/lib/cryptodev/rte_cryptodev.h
> > > b/lib/cryptodev/rte_cryptodev.h index bb01f0f195..d9271a6c45 100644
> > > --- a/lib/cryptodev/rte_cryptodev.h
> > > +++ b/lib/cryptodev/rte_cryptodev.h
> > > @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> > > rte_crypto_asym_xform_type *xform_enum,  /**< Support operations on
> > multiple data-units message */
> > >  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
> > >  /**< Support wrapped key in cipher xform  */
> > > +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
> > > +/**< Support inner checksum computation/verification */
> > >
> > >  /**
> > >   * Get the name of a crypto device feature flag
> > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > index ab1a6e1f65..945f45ad76 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
> > >  	 * * 0: Do not match UDP ports
> > >  	 */
> > >  	uint32_t udp_ports_verify : 1;
> > > +
> > > +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> > > +	 *
> > > +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> > > +	 *      before tunnel encapsulation and for inbound, verify after
> > > +	 *      tunnel decapsulation.
> > > +	 * * 0: Inner packet IP header checksum is not computed/verified.
> > > +	 */
> > > +	uint32_t ip_csum_enable : 1;
> > > +
> > > +	/** Compute/verify inner packet L4 checksum in tunnel mode
> > > +	 *
> > > +	 * * 1: For outbound, compute inner packet L4 checksum before
> > > +	 *      tunnel encapsulation and for inbound, verify after
> > > +	 *      tunnel decapsulation.
> > > +	 * * 0: Inner packet L4 checksum is not computed/verified.
> > > +	 */
> > > +	uint32_t l4_csum_enable : 1;
> >
> > As I understand these 2 new flags serve two purposes:
> > 1. report HW/PMD ability to perform these offloads.
> > 2. allow user to enable/disable this offload on SA basis.
> 
> [Anoob] Correct
> 
> >
> > One question I have - how it will work on data-path?
> > Would decision to perform these offloads be based on mbuf->ol_flags value
> > (same as we doing for ethdev TX offloads)?
> > Or some other approach is implied?
> 
> [Anoob] There will be two settings. It can be enabled per SA or per packet.

Ok, will it be documented somewhere?
Or probably it already is, and I just missed/forgot it somehow?

> >
> > >  };
> > >
> > >  /** IPSec security association direction */
> > > --
> > > 2.22.0


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 2/4] mempool: add non-IO flag
  @ 2021-09-29 14:52  4%   ` dkozlyuk
  0 siblings, 0 replies; 200+ results
From: dkozlyuk @ 2021-09-29 14:52 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk, Matan Azrad, Olivier Matz, Andrew Rybchenko

From: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>

Mempool is a generic allocator that is not necessarily used for device
IO operations, i.e. its memory is not necessarily used for DMA. Add the
MEMPOOL_F_NON_IO flag to mark such mempools.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/rel_notes/release_21_11.rst | 3 +++
 lib/mempool/rte_mempool.h              | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f85dc99c8b..873beda633 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -155,6 +155,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index c81e488851..4d18957d6d 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -263,6 +263,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NON_IO         0x0040 /**< Not used for device IO (DMA). */
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -992,6 +993,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *     "single-consumer". Otherwise, it is "multi-consumers".
  *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
+ *   - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
+ *     never used for device IO, i.e. DMA operations,
+ *     which may affect some PMD behavior.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to offload flags
  @ 2021-09-29 21:48  1% ` Olivier Matz
  0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2021-09-29 21:48 UTC (permalink / raw)
  To: dev

Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
names. The old flags remain usable, but a deprecation warning is issued
at compilation.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 +++----
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |   4 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +--
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  26 +-
 drivers/net/cnxk/cn10k_tx.h                   | 172 ++++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  26 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 ++++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 ++++---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +--
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 107 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 ++--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 102 ++---
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 ++++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 +++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 398 ++++++++++++------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 169 files changed, 3085 insertions(+), 2975 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533..826c06c4b5 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -481,12 +481,12 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 	if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr = l3_hdr;
 
-		ol_flags |= PKT_TX_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_IPV4;
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
-			ol_flags |= PKT_TX_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		} else {
 			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
-				ol_flags |= PKT_TX_IP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
 				ipv4_hdr->hdr_checksum =
@@ -494,7 +494,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 	} else if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV6))
-		ol_flags |= PKT_TX_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_IPV6;
 	else
 		return 0; /* packet type not supported, nothing to do */
 
@@ -503,7 +503,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
 			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
-				ol_flags |= PKT_TX_UDP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
 				udp_hdr->dgram_cksum =
@@ -512,13 +512,13 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_UDP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 	} else if (info->l4_proto == IPPROTO_TCP) {
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
-			ol_flags |= PKT_TX_TCP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
 			tcp_hdr->cksum =
@@ -526,7 +526,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 					info->ethertype);
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	} else if (info->l4_proto == IPPROTO_SCTP) {
 		sctp_hdr = (struct rte_sctp_hdr *)
 			((char *)l3_hdr + info->l3_len);
@@ -534,7 +534,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		 * offloaded */
 		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
-			ol_flags |= PKT_TX_SCTP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 		} else {
 			sctp_hdr->cksum = 0;
 			/* XXX implement CRC32c, example available in
@@ -557,14 +557,14 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 
 	if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr->hdr_checksum = 0;
-		ol_flags |= PKT_TX_OUTER_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 
 		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
-			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	} else
-		ol_flags |= PKT_TX_OUTER_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
 
 	if (info->outer_l4_proto != IPPROTO_UDP)
 		return ol_flags;
@@ -573,7 +573,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		((char *)outer_l3_hdr + info->outer_l3_len);
 
 	if (tso_enabled)
-		ol_flags |= PKT_TX_TCP_SEG;
+		ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
 	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
@@ -584,7 +584,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 			udp_hdr->dgram_cksum
 				= rte_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
 
-		ol_flags |= PKT_TX_OUTER_UDP_CKSUM;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_UDP_CKSUM;
 		return ol_flags;
 	}
 
@@ -855,17 +855,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		info.is_tunnel = 0;
 		info.pkt_len = rte_pktmbuf_pkt_len(m);
 		tx_ol_flags = m->ol_flags &
-			      (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF);
+			      (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL);
 		rx_ol_flags = m->ol_flags;
 
 		/* Update the L3/L4 checksum error packet statistics */
-		if ((rx_ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) == RTE_MBUF_F_RX_IP_CKSUM_BAD)
 			rx_bad_ip_csum += 1;
-		if ((rx_ol_flags & PKT_RX_L4_CKSUM_MASK) == PKT_RX_L4_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) == RTE_MBUF_F_RX_L4_CKSUM_BAD)
 			rx_bad_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_L4_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD)
 			rx_bad_outer_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_IP_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)
 			rx_bad_outer_ip_csum += 1;
 
 		/* step 1: dissect packet, parsing optional vlan, ip4/ip6, vxlan
@@ -888,26 +888,26 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gtp(udp_hdr, &info);
 				if (info.is_tunnel) {
-					tx_ol_flags |= PKT_TX_TUNNEL_GTP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GTP;
 					goto tunnel_update;
 				}
 				parse_vxlan_gpe(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN_GPE;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
 					goto tunnel_update;
 				}
 				parse_vxlan(udp_hdr, &info,
 					    m->packet_type);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN;
 					goto tunnel_update;
 				}
 				parse_geneve(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_GENEVE;
+						RTE_MBUF_F_TX_TUNNEL_GENEVE;
 					goto tunnel_update;
 				}
 			} else if (info.l4_proto == IPPROTO_GRE) {
@@ -917,14 +917,14 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gre(gre_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_GRE;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GRE;
 			} else if (info.l4_proto == IPPROTO_IPIP) {
 				void *encap_ip_hdr;
 
 				encap_ip_hdr = (char *)l3_hdr + info.l3_len;
 				parse_encap_ip(encap_ip_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_IPIP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_IPIP;
 			}
 		}
 
@@ -950,7 +950,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			tx_ol_flags |= process_outer_cksums(outer_l3_hdr, &info,
 					tx_offloads,
-					!!(tx_ol_flags & PKT_TX_TCP_SEG));
+					!!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG));
 		}
 
 		/* step 3: fill the mbuf meta data (flags and header lengths) */
@@ -1014,7 +1014,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				"l4_proto=%d l4_len=%d flags=%s\n",
 				info.l2_len, rte_be_to_cpu_16(info.ethertype),
 				info.l3_len, info.l4_proto, info.l4_len, buf);
-			if (rx_ol_flags & PKT_RX_LRO)
+			if (rx_ol_flags & RTE_MBUF_F_RX_LRO)
 				printf("rx: m->lro_segsz=%u\n", m->tso_segsz);
 			if (info.is_tunnel == 1)
 				printf("rx: outer_l2_len=%d outer_ethertype=%x "
@@ -1035,17 +1035,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
 				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
-				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
+				    (tx_ol_flags & RTE_MBUF_F_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
 						m->outer_l2_len,
 						m->outer_l3_len);
 				if (info.tunnel_tso_segsz != 0 &&
-						(m->ol_flags & PKT_TX_TCP_SEG))
+						(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 					printf("tx: m->tso_segsz=%d\n",
 						m->tso_segsz);
 			} else if (info.tso_segsz != 0 &&
-					(m->ol_flags & PKT_TX_TCP_SEG))
+					(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 				printf("tx: m->tso_segsz=%d\n", m->tso_segsz);
 			rte_get_tx_ol_flag_list(m->ol_flags, buf, sizeof(buf));
 			printf("tx: flags=%s", buf);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index d0360b4363..8d9eeb057c 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN;
+		ol_flags |= RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		if (!nb_pkt || !nb_clones) {
@@ -152,7 +152,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 								   sizeof(*ip_hdr));
 			pkt->nb_segs		= 1;
 			pkt->pkt_len		= pkt_size;
-			pkt->ol_flags		&= EXT_ATTACHED_MBUF;
+			pkt->ol_flags		&= RTE_MBUF_F_EXTERNAL;
 			pkt->ol_flags		|= ol_flags;
 			pkt->vlan_tci		= vlan_tci;
 			pkt->vlan_tci_outer	= vlan_tci_outer;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..7f3a6c95e7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -114,7 +114,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 	eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)) {
 		if (eth_type == RTE_ETHER_TYPE_1588) {
 			printf("Port %u Received PTP packet not filtered"
 			       " by hardware\n",
@@ -163,7 +163,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	 * Check that the received PTP packet has been timestamped by the
 	 * hardware.
 	 */
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_TMST)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_TMST)) {
 		printf("Port %u Received PTP packet not timestamped"
 		       " by hardware\n",
 		       fs->rx_port);
@@ -183,7 +183,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
-	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
+	mb->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 	fs->tx_packets += 1;
 	if (rte_eth_tx_burst(fs->rx_port, fs->tx_queue, &mb, 1) == 0) {
 		printf("Port %u sent PTP packet dropped\n", fs->rx_port);
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 21be8bb470..ea979c4a71 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,11 +73,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
 			rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i + 1],
@@ -88,7 +88,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 				&eth_hdr->d_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->s_addr);
-		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+		mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
 		mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a4..0d43d5cceb 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -11,11 +11,11 @@ ol_flags_init(uint64_t tx_offload)
 	uint64_t ol_flags = 0;
 
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
-			PKT_TX_VLAN : 0;
+			RTE_MBUF_F_TX_VLAN : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
-			PKT_TX_QINQ : 0;
+			RTE_MBUF_F_TX_QINQ : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
-			PKT_TX_MACSEC : 0;
+			RTE_MBUF_F_TX_MACSEC : 0;
 
 	return ol_flags;
 }
@@ -26,10 +26,10 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 {
 	int i;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci = vlan;
-	if (ol_flags & PKT_TX_QINQ)
+	if (ol_flags & RTE_MBUF_F_TX_QINQ)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci_outer = outer_vlan;
 }
@@ -37,7 +37,7 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 static inline void
 mbuf_field_set(struct rte_mbuf *mb, uint64_t ol_flags)
 {
-	mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+	mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 	mb->ol_flags |= ol_flags;
 	mb->l2_len = sizeof(struct rte_ether_hdr);
 	mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ab7cd622c7..881a1120e9 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -207,7 +207,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 
 	rte_pktmbuf_reset_headroom(pkt);
 	pkt->data_len = tx_pkt_seg_lengths[0];
-	pkt->ol_flags &= EXT_ATTACHED_MBUF;
+	pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	pkt->ol_flags |= ol_flags;
 	pkt->vlan_tci = vlan_tci;
 	pkt->vlan_tci_outer = vlan_tci_outer;
@@ -353,11 +353,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	/*
 	 * Initialize Ethernet header.
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..a3c4318164 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -151,20 +151,20 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 			  eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
-		if (ol_flags & PKT_RX_RSS_HASH) {
+		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS hash=0x%x",
 				  (unsigned int) mb->hash.rss);
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS queue=0x%x", (unsigned int) queue);
 		}
-		if (ol_flags & PKT_RX_FDIR) {
+		if (ol_flags & RTE_MBUF_F_RX_FDIR) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - FDIR matched ");
-			if (ol_flags & PKT_RX_FDIR_ID)
+			if (ol_flags & RTE_MBUF_F_RX_FDIR_ID)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "ID=0x%x", mb->hash.fdir.hi);
-			else if (ol_flags & PKT_RX_FDIR_FLX)
+			else if (ol_flags & RTE_MBUF_F_RX_FDIR_FLX)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "flex bytes=0x%08x %08x",
 					  mb->hash.fdir.hi, mb->hash.fdir.lo);
@@ -176,18 +176,18 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		if (is_timestamp_enabled(mb))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - timestamp %"PRIu64" ", get_timestamp(mb));
-		if (ol_flags & PKT_RX_QINQ)
+		if (ol_flags & RTE_MBUF_F_RX_QINQ)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
 				  mb->vlan_tci, mb->vlan_tci_outer);
-		else if (ol_flags & PKT_RX_VLAN)
+		else if (ol_flags & RTE_MBUF_F_RX_VLAN)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - VLAN tci=0x%x", mb->vlan_tci);
-		if (!is_rx && (ol_flags & PKT_TX_DYNF_METADATA))
+		if (!is_rx && (ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Tx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
-		if (is_rx && (ol_flags & PKT_RX_DYNF_METADATA))
+		if (is_rx && (ol_flags & RTE_MBUF_DYNFLAG_RX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Rx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
@@ -325,7 +325,7 @@ tx_pkt_set_md(uint16_t port_id, __rte_unused uint16_t queue,
 		for (i = 0; i < nb_pkts; i++) {
 			*RTE_FLOW_DYNF_METADATA(pkts[i]) =
 						ports[port_id].tx_metadata;
-			pkts[i]->ol_flags |= PKT_TX_DYNF_METADATA;
+			pkts[i]->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
 		}
 	return nb_pkts;
 }
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..1bec63b0e8 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -1622,8 +1622,8 @@ inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
 			"ibuf pkt_len is not equal to obuf pkt_len");
 
 		/* check mbuf ol_flags */
-		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
-			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD,
+			    "ibuf RTE_MBUF_F_TX_SEC_OFFLOAD is not set");
 	}
 	return 0;
 }
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 9a248dfaea..ee034fb898 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1495,7 +1495,7 @@ test_get_rx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_L4_CKSUM_MASK, buf, 0);
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_L4_CKSUM_MASK, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1526,7 +1526,8 @@ test_get_rx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_SEC_OFFLOAD, buf, sizeof(buf));
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_SEC_OFFLOAD, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1553,7 +1554,7 @@ test_get_tx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_IP_CKSUM, buf, 0);
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_IP_CKSUM, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1585,7 +1586,8 @@ test_get_tx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_UDP_CKSUM, buf, sizeof(buf));
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_UDP_CKSUM, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1611,28 +1613,28 @@ test_get_rx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name rx_flags[] = {
-		VAL_NAME(PKT_RX_VLAN),
-		VAL_NAME(PKT_RX_RSS_HASH),
-		VAL_NAME(PKT_RX_FDIR),
-		VAL_NAME(PKT_RX_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_L4_CKSUM_NONE),
-		VAL_NAME(PKT_RX_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_IP_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_IP_CKSUM_NONE),
-		VAL_NAME(PKT_RX_OUTER_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_VLAN_STRIPPED),
-		VAL_NAME(PKT_RX_IEEE1588_PTP),
-		VAL_NAME(PKT_RX_IEEE1588_TMST),
-		VAL_NAME(PKT_RX_FDIR_ID),
-		VAL_NAME(PKT_RX_FDIR_FLX),
-		VAL_NAME(PKT_RX_QINQ_STRIPPED),
-		VAL_NAME(PKT_RX_LRO),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD_FAILED),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_INVALID),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN),
+		VAL_NAME(RTE_MBUF_F_RX_RSS_HASH),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_PTP),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_ID),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_FLX),
+		VAL_NAME(RTE_MBUF_F_RX_QINQ_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_LRO),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID),
 	};
 
 	/* Test case to check with valid flag */
@@ -1663,31 +1665,31 @@ test_get_tx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name tx_flags[] = {
-		VAL_NAME(PKT_TX_VLAN),
-		VAL_NAME(PKT_TX_IP_CKSUM),
-		VAL_NAME(PKT_TX_TCP_CKSUM),
-		VAL_NAME(PKT_TX_SCTP_CKSUM),
-		VAL_NAME(PKT_TX_UDP_CKSUM),
-		VAL_NAME(PKT_TX_IEEE1588_TMST),
-		VAL_NAME(PKT_TX_TCP_SEG),
-		VAL_NAME(PKT_TX_IPV4),
-		VAL_NAME(PKT_TX_IPV6),
-		VAL_NAME(PKT_TX_OUTER_IP_CKSUM),
-		VAL_NAME(PKT_TX_OUTER_IPV4),
-		VAL_NAME(PKT_TX_OUTER_IPV6),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN),
-		VAL_NAME(PKT_TX_TUNNEL_GRE),
-		VAL_NAME(PKT_TX_TUNNEL_IPIP),
-		VAL_NAME(PKT_TX_TUNNEL_GENEVE),
-		VAL_NAME(PKT_TX_TUNNEL_MPLSINUDP),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN_GPE),
-		VAL_NAME(PKT_TX_TUNNEL_IP),
-		VAL_NAME(PKT_TX_TUNNEL_UDP),
-		VAL_NAME(PKT_TX_QINQ),
-		VAL_NAME(PKT_TX_MACSEC),
-		VAL_NAME(PKT_TX_SEC_OFFLOAD),
-		VAL_NAME(PKT_TX_UDP_SEG),
-		VAL_NAME(PKT_TX_OUTER_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_VLAN),
+		VAL_NAME(RTE_MBUF_F_TX_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_SCTP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GRE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IPIP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GENEVE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_MPLSINUDP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_UDP),
+		VAL_NAME(RTE_MBUF_F_TX_QINQ),
+		VAL_NAME(RTE_MBUF_F_TX_MACSEC),
+		VAL_NAME(RTE_MBUF_F_TX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_UDP_CKSUM),
 	};
 
 	/* Test case to check with valid flag */
@@ -1755,8 +1757,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is counted only for IPV4 packet */
 	/* set both IP checksum and IPV6 flags */
-	ol_flags |= PKT_TX_IP_CKSUM;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_CKSUM_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1765,14 +1767,14 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 	ol_flags = 0;
 
 	/* test to validate if IP type is set when required */
-	ol_flags |= PKT_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_L4_MASK;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: IP type is not set.\n", __func__);
 
 	/* test if IP type is set when TCP SEG is on */
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1780,8 +1782,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm IP type (IPV4/IPV6) is set */
-	ol_flags = PKT_TX_L4_MASK;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags = RTE_MBUF_F_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_SET",
 				pktmbuf_pool,
 				ol_flags, 0, 0) < 0)
@@ -1789,15 +1791,15 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to check TSO segment size is non-zero */
-	ol_flags |= PKT_TX_IPV4;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	/* set 0 tso segment size */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_NULL_TSO_SEGSZ",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: tso segment size is null.\n", __func__);
 
-	/* retain IPV4 and PKT_TX_TCP_SEG mask */
+	/* retain IPV4 and RTE_MBUF_F_TX_TCP_SEG mask */
 	/* set valid tso segment size but IP CKSUM not set */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_NOT_SET",
 				pktmbuf_pool,
@@ -1806,7 +1808,7 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is set for TSO capability */
 	/* retain IPV4, TCP_SEG, tso_seg size */
-	ol_flags |= PKT_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1814,8 +1816,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to confirm TSO for IPV6 type */
 	ol_flags = 0;
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1823,8 +1825,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test if outer IP checksum set for non outer IPv4 packet */
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 512, -EINVAL) < 0)
@@ -1832,8 +1834,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm outer IP checksum is set for outer IPV4 packet */
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
-	ol_flags |= PKT_TX_OUTER_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -2366,7 +2368,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	buf_iova = rte_mem_virt2iova(ext_buf_addr);
 	rte_pktmbuf_attach_extbuf(m, ext_buf_addr, buf_iova, buf_len,
 		ret_shinfo);
-	if (m->ol_flags != EXT_ATTACHED_MBUF)
+	if (m->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2380,7 +2382,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	/* attach the same external buffer to the cloned mbuf */
 	rte_pktmbuf_attach_extbuf(clone, ext_buf_addr, buf_iova, buf_len,
 			ret_shinfo);
-	if (clone->ol_flags != EXT_ATTACHED_MBUF)
+	if (clone->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2654,8 +2656,8 @@ test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
 			flag2, strerror(errno));
 
 	flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
-						rte_bsf64(PKT_LAST_FREE));
-	if (flag3 != rte_bsf64(PKT_LAST_FREE))
+						rte_bsf64(RTE_MBUF_F_LAST_FREE));
+	if (flag3 != rte_bsf64(RTE_MBUF_F_LAST_FREE))
 		GOTO_FAIL("failed to register dynamic flag 3, flag3=%d: %s",
 			flag3, strerror(errno));
 
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3..309b91a01f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -512,9 +512,9 @@ configured TPID.
     // enable VLAN insert offload
     testpmd> port config (port_id) rx_offload vlan_insert|qinq_insert (on|off)
 
-    if (mbuf->ol_flags && PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+    if (mbuf->ol_flags && RTE_MBUF_F_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
         tci_value = mbuf->vlan_tci_outer
-    else if (mbuf->ol_flags && PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+    else if (mbuf->ol_flags && RTE_MBUF_F_TX_VLAN)  // case-2: insert VLAN to untagged packet
         tci_value = mbuf->vlan_tci
 
 VLAN Strip
@@ -528,7 +528,7 @@ The application configures the per-port VLAN strip offload.
     testpmd> port config (port_id) tx_offload vlan_strip (on|off)
 
     // notify application VLAN strip via mbuf
-    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_STRIPPED // outer VLAN is found and stripped
     mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
 
 Time Synchronization
@@ -552,7 +552,7 @@ packets to application via mbuf.
 .. code-block:: console
 
     // RX packet completion will indicate whether the packet is PTP
-    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+    mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
 
 Statistics Collection
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a..d5ffd51dea 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -279,9 +279,9 @@ inner and outer packets can be IPv4 or IPv6.
 - Rx checksum offloads.
 
   The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
-  Good checksum flags (e.g. ``PKT_RX_L4_CKSUM_GOOD``) indicate that the inner
+  Good checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_GOOD``) indicate that the inner
   packet has the correct checksum, and if applicable, the outer packet also
-  has the correct checksum. Bad checksum flags (e.g. ``PKT_RX_L4_CKSUM_BAD``)
+  has the correct checksum. Bad checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_BAD``)
   indicate that the inner and/or outer packets have invalid checksum values.
 
 - Inner Rx packet type classification
@@ -437,8 +437,8 @@ Limitations
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
 packets with the default VLAN tag are stripped by the adapter and presented to
-DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
-PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
+DPDK as untagged packets. In this case mbuf->vlan_tci and the RTE_MBUF_F_RX_VLAN and
+RTE_MBUF_F_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
 ``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
     -a 12:00.0,ig-vlan-rewrite=untag
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..0c059411e8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -210,7 +210,7 @@ Supports Large Receive Offload.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_LRO``, ``mbuf.tso_segsz``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
@@ -224,7 +224,7 @@ Supports TCP Segmentation Offloading.
 
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``RTE_MBUF_F_TX_TCP_SEG``, ``RTE_MBUF_F_TX_IPV4``, ``RTE_MBUF_F_TX_IPV6``, ``RTE_MBUF_F_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
 * **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
@@ -292,7 +292,7 @@ Supports RSS hashing on RX.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_inner_rss:
@@ -304,7 +304,7 @@ Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
 * **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_rss_key_update:
@@ -435,8 +435,8 @@ of protocol operations. See Security library and PMD documentation for more deta
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
 
 
@@ -458,8 +458,8 @@ protocol operations. See security library and PMD documentation for more details
   ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
 
 
@@ -483,9 +483,9 @@ Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN`` ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
@@ -501,9 +501,9 @@ Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
-  ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_QINQ``, ``mbuf.vlan_tci_outer``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ``,
+  ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
@@ -533,12 +533,12 @@ Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
-  ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
-  ``PKT_RX_IP_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_BAD`` | ``RTE_MBUF_F_RX_IP_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
@@ -552,13 +552,13 @@ Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
-  ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_L4_NO_CKSUM`` | ``RTE_MBUF_F_TX_TCP_CKSUM`` |
+  ``RTE_MBUF_F_TX_SCTP_CKSUM`` | ``RTE_MBUF_F_TX_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
-  ``PKT_RX_L4_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_L4_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
@@ -570,7 +570,7 @@ Timestamp offload
 Supports Timestamp.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
@@ -584,7 +584,7 @@ Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
@@ -598,12 +598,12 @@ Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
@@ -616,11 +616,11 @@ Inner L4 checksum
 Supports inner packet L4 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
-  ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e634382..97e4648a38 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -264,7 +264,7 @@ Intel 82599 10 Gigabit Ethernet Controller Specification Update (Revision 2.87)
 Errata: 44 Integrity Error Reported for IPv4/UDP Packets With Zero Checksum
 
 To support UDP zero checksum, the zero and bad UDP checksum packet is marked as
-PKT_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
+RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
 validate it.
 
 Inline crypto processing support
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d8..9324ce7818 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -255,7 +255,7 @@ Limitations
   no MPRQ feature or vectorized code can be engaged.
 
 - When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
-  externally attached to a user-provided mbuf with having EXT_ATTACHED_MBUF in
+  externally attached to a user-provided mbuf with having RTE_MBUF_F_EXTERNAL in
   ol_flags. As the mempool for the external buffer is managed by PMD, all the
   Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
   the external buffers will be freed by PMD and the application which still
@@ -263,7 +263,7 @@ Limitations
 
 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
   enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
-  supported. Some Rx packets may not have PKT_RX_RSS_HASH.
+  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
 
 - IPv6 Multicast messages are not supported on VM, while promiscuous mode
   and allmulticast mode are both set to off.
@@ -644,7 +644,7 @@ Driver options
   the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
   A mempool for external buffers will be allocated and managed by PMD. If Rx
   packet is externally attached, ol_flags field of the mbuf will have
-  EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
+  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
   checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
 
 - ``rxqs_min_mprq`` parameter [int]
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b..6537f3d5d6 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -211,11 +211,11 @@ To segment an outgoing packet, an application must:
      responsibility to ensure that these flags are set.
 
    - For example, in order to segment TCP/IPv4 packets, the application should
-     add the ``PKT_TX_IPV4`` and ``PKT_TX_TCP_SEG`` flags to the mbuf's
+     add the ``RTE_MBUF_F_TX_IPV4`` and ``RTE_MBUF_F_TX_TCP_SEG`` flags to the mbuf's
      ol_flags.
 
    - If checksum calculation in hardware is required, the application should
-     also add the ``PKT_TX_TCP_CKSUM`` and ``PKT_TX_IP_CKSUM`` flags.
+     also add the ``RTE_MBUF_F_TX_TCP_CKSUM`` and ``RTE_MBUF_F_TX_IP_CKSUM`` flags.
 
 #. Check if the packet should be processed. Packets with one of the
    following properties are not processed and are returned immediately:
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e4..15b266c295 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -123,7 +123,7 @@ timestamp mechanism, the VLAN tagging and the IP checksum computation.
 
 On TX side, it is also possible for an application to delegate some
 processing to the hardware if it supports it. For instance, the
-PKT_TX_IP_CKSUM flag allows to offload the computation of the IPv4
+RTE_MBUF_F_TX_IP_CKSUM flag allows to offload the computation of the IPv4
 checksum.
 
 The following examples explain how to configure different TX offloads on
@@ -134,7 +134,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
   This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
@@ -143,7 +143,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_UDP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM | RTE_MBUF_F_TX_UDP_CKSUM
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -154,7 +154,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
@@ -165,7 +165,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_TCP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CSUM | RTE_MBUF_F_TX_TCP_CKSUM
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -179,8 +179,8 @@ a vxlan-encapsulated tcp packet:
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
     mb->l4_len = len(in_tcp)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
-      PKT_TX_TCP_SEG;
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM |
+      RTE_MBUF_F_TX_TCP_SEG;
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
@@ -194,8 +194,8 @@ a vxlan-encapsulated tcp packet:
     mb->outer_l3_len = len(out_ip)
     mb->l2_len = len(out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM  | \
-      PKT_TX_IP_CKSUM |  PKT_TX_TCP_CKSUM;
+    mb->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM  | \
+      RTE_MBUF_F_TX_IP_CKSUM |  RTE_MBUF_F_TX_TCP_CKSUM;
     set out_ip checksum to 0 in the packet
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index eca855d601..f8416eaa02 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -290,7 +290,7 @@ Timestamp and latency calculation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Latency stats library marks the time in the timestamp field of the
-mbuf for the ingress packets and sets the ``PKT_RX_TIMESTAMP`` flag of
+mbuf for the ingress packets and sets the ``RTE_MBUF_F_RX_TIMESTAMP`` flag of
 ``ol_flags`` for the mbuf to indicate the marked time as a valid one.
 At the egress, the mbufs with the flag set are considered having valid
 timestamp and are used for the latency calculation.
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..8f9251953d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -687,9 +687,9 @@ Item: ``META``
 Matches 32 bit metadata item set.
 
 On egress, metadata can be set either by mbuf metadata field with
-PKT_TX_DYNF_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
+RTE_MBUF_DYNFLAG_TX_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
 action sets metadata for a packet and the metadata will be reported via
-``metadata`` dynamic field of ``rte_mbuf`` with PKT_RX_DYNF_METADATA flag.
+``metadata`` dynamic field of ``rte_mbuf`` with RTE_MBUF_DYNFLAG_RX_METADATA flag.
 
 - Default ``mask`` matches the specified Rx metadata value.
 
@@ -1656,8 +1656,8 @@ flows to loop between groups.
 Action: ``MARK``
 ^^^^^^^^^^^^^^^^
 
-Attaches an integer value to packets and sets ``PKT_RX_FDIR`` and
-``PKT_RX_FDIR_ID`` mbuf flags.
+Attaches an integer value to packets and sets ``RTE_MBUF_F_RX_FDIR`` and
+``RTE_MBUF_F_RX_FDIR_ID`` mbuf flags.
 
 This value is arbitrary and application-defined. Maximum allowed value
 depends on the underlying implementation. It is returned in the
@@ -1677,7 +1677,7 @@ Action: ``FLAG``
 ^^^^^^^^^^^^^^^^
 
 Flags packets. Similar to `Action: MARK`_ without a specific value; only
-sets the ``PKT_RX_FDIR`` mbuf flag.
+sets the ``RTE_MBUF_F_RX_FDIR`` mbuf flag.
 
 - No configurable properties.
 
@@ -2635,10 +2635,10 @@ Action: ``SET_META``
 
 Set metadata. Item ``META`` matches metadata.
 
-Metadata set by mbuf metadata field with PKT_TX_DYNF_METADATA flag on egress
+Metadata set by mbuf metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag on egress
 will be overridden by this action. On ingress, the metadata will be carried by
 ``metadata`` dynamic field of ``rte_mbuf`` which can be accessed by
-``RTE_FLOW_DYNF_METADATA()``. PKT_RX_DYNF_METADATA flag will be set along
+``RTE_FLOW_DYNF_METADATA()``. RTE_MBUF_DYNFLAG_RX_METADATA flag will be set along
 with the data.
 
 The mbuf dynamic field must be registered by calling
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 549e9416c4..fd277e28c9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,11 +39,6 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
-* mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
-  A compatibility layer will be kept until DPDK 22.11, except for the flags
-  that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
-  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``) which will be removed.
-
 * pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
   will be made internal in 21.11 and macros/data structures/functions defined
   in the header will not be considered as ABI anymore. This change is inspired
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..33519bf90b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -141,6 +141,10 @@ Removed Items
   blacklist/whitelist are removed. Users must use the new
   block/allow list arguments.
 
+* mbuf: The mbuf offload flags ``PKT_*`` are renamed as ``RTE_MBUF_F_*``. A
+  compatibility layer will be kept until DPDK 22.11, except for the flags that
+  are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
+  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``), which are now removed.
 
 API Changes
 -----------
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index c5e0a83a8c..c9dc2cba9b 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -466,7 +466,7 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 682cf8b607..27207ab9a3 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -367,7 +367,7 @@ mlx5_crypto_addr2mr(struct mlx5_crypto_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index 8b056ddc5a..1300c4f155 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -428,53 +428,53 @@ octeontx_create_rx_ol_flags_array(void *mem)
 		errcode = idx & 0xff;
 		errlev = (idx & 0x700) >> 8;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case OCCTX_ERRLEV_RE:
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LC:
 			if (errcode == OCCTX_EC_IP4_CSUM) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LD:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LE:
 			if (errcode == OCCTX_EC_IP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LF:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		}
 
diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ccc6de588e 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -126,7 +126,7 @@ ssovf_octeontx_wqe_to_pkt(uint64_t work, uint16_t port_info,
 
 	if (!!(flag & OCCTX_RX_VLAN_FLTR_F)) {
 		if (likely(wqe->s.w2.vv)) {
-			mbuf->ol_flags |= PKT_RX_VLAN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->vlan_tci =
 				ntohs(*((uint16_t *)((char *)mbuf->buf_addr +
 					mbuf->data_off + wqe->s.w4.vlptr + 2)));
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 3e36dcece1..aa766c6602 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -277,7 +277,7 @@ otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	uint16_t ref_cnt = m->refcnt;
 
 	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
-	    (m->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+	    (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 		txq = otx2_ssogws_xtract_meta(m, txq_data);
 		return otx2_sec_event_tx(base, ev, m, txq, flags);
 	}
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 294132b759..e1a0a4cf94 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -147,7 +147,7 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		/* check for vlan info */
 		if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
 			mbuf->vlan_tci = ppd->tp_vlan_tci;
-			mbuf->ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			mbuf->ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 		}
 
 		/* release incoming frame and advance ring buffer */
@@ -224,7 +224,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..402892cc9e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -15,20 +15,20 @@
 #include "hw_atl/hw_atl_b0_internal.h"
 
 #define ATL_TX_CKSUM_OFFLOAD_MASK (			 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_MASK (				 \
-	PKT_TX_VLAN |					 \
-	PKT_TX_IPV6 |					 \
-	PKT_TX_IPV4 |					 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_VLAN |					 \
+	RTE_MBUF_F_TX_IPV6 |					 \
+	RTE_MBUF_F_TX_IPV4 |					 \
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -850,21 +850,21 @@ atl_desc_to_offload_flags(struct atl_rx_queue *rxq,
 	if (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) {
 		/* IPv4 csum error ? */
 		if (rxd_wb->rx_stat & BIT(1))
-			mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	/* CSUM calculated ? */
 	if (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) {
 		if (rxd_wb->rx_stat & BIT(2))
-			mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 
 	return mbuf_flags;
@@ -1044,12 +1044,12 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);
 
 			if (rx_mbuf->packet_type & RTE_PTYPE_L2_ETHER_VLAN) {
-				rx_mbuf->ol_flags |= PKT_RX_VLAN;
+				rx_mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				rx_mbuf->vlan_tci = rxd_wb.vlan;
 
 				if (cfg->vlan_strip)
 					rx_mbuf->ol_flags |=
-						PKT_RX_VLAN_STRIPPED;
+						RTE_MBUF_F_RX_VLAN_STRIPPED;
 			}
 
 			if (!rx_mbuf_first)
@@ -1179,12 +1179,12 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 	uint32_t tx_cmd = 0;
 	uint64_t ol_flags = tx_pkt->ol_flags;
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_cmd |= tx_desc_cmd_lso | tx_desc_cmd_l4cs;
 
 		txc->cmd = 0x4;
 
-		if (ol_flags & PKT_TX_IPV6)
+		if (ol_flags & RTE_MBUF_F_TX_IPV6)
 			txc->cmd |= 0x2;
 
 		txc->l2_len = tx_pkt->l2_len;
@@ -1194,7 +1194,7 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 		txc->mss_len = tx_pkt->tso_segsz;
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_cmd |= tx_desc_cmd_vlan;
 		txc->vlan_tag = tx_pkt->vlan_tci;
 	}
@@ -1212,9 +1212,9 @@ atl_setup_csum_offload(struct rte_mbuf *mbuf, struct hw_atl_txd_s *txd,
 		       uint32_t tx_cmd)
 {
 	txd->cmd |= tx_desc_cmd_fcs;
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
 	/* L4 csum requested */
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
 	txd->cmd |= tx_cmd;
 }
 
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 01553958be..dbb100763e 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1310,7 +1310,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	src_offset = 0;
 
 	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-		ol_flags = PKT_RX_VLAN;
+		ol_flags = RTE_MBUF_F_RX_VLAN;
 		vlan_tci = pkt_buf->vlan_tci;
 	} else {
 		ol_flags = 0;
@@ -1568,7 +1568,7 @@ avp_recv_pkts(void *rx_queue,
 		m->port = avp->port_id;
 
 		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-			m->ol_flags = PKT_RX_VLAN;
+			m->ol_flags = RTE_MBUF_F_RX_VLAN;
 			m->vlan_tci = pkt_buf->vlan_tci;
 		}
 
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 45b9bd3e39..67149abf80 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -260,17 +260,17 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (
 				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
@@ -282,25 +282,25 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 				} else {
 					mbuf->ol_flags &=
-						~(PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED);
+						~(RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED);
 					mbuf->vlan_tci = 0;
 				}
 		}
 		/* Indicate if a Context Descriptor is next */
 		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, CDA))
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
-					| PKT_RX_IEEE1588_TMST;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
+					| RTE_MBUF_F_RX_IEEE1588_TMST;
 		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
 					     PL) - rxq->crc_len;
 		/* Mbuf populate */
@@ -426,17 +426,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 			} else {
 				mbuf->ol_flags &=
-					~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+					~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 				mbuf->vlan_tci = 0;
 			}
 		}
@@ -465,17 +465,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		first_seg->port = rxq->port_id;
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (unlikely(error_status
 						== AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 
@@ -795,7 +795,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
 			   mbuf->pkt_len);
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, TTSE, 1);
 	rte_wmb();
 	/* Mark it as First and Last Descriptor */
@@ -804,14 +804,14 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	/* Mark it as a NORMAL descriptor */
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
 	/* configure h/w Offload */
-	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
-	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+	mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+	if ((mask == RTE_MBUF_F_TX_TCP_CKSUM) || (mask == RTE_MBUF_F_TX_UDP_CKSUM))
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
-	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+	else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (mbuf->ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
index 1c962b9333..816371cd79 100644
--- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
+++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
@@ -23,7 +23,7 @@ axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
 {
 	uint64_t tmst_en = 0;
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		tmst_en = TX_DESC_CTRL_FLAG_TMST;
 	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
 					    TX_DESC_CTRL_FLAGS | mbuf->data_len
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 235c374180..6a710021dc 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN) {
+	if (m0->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 2b17602290..2570a6b252 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -435,7 +435,7 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (cqe_fp->pars_flags.flags & PARSING_FLAGS_VLAN) {
 			rx_mb->vlan_tci = cqe_fp->vlan_tag;
-			rx_mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			rx_mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		rx_pkts[nb_rx] = rx_mb;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index a40fa50138..882206f93c 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -260,25 +260,25 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	mbuf->pkt_len = rte_le_to_cpu_32(tpa_start->len);
 	mbuf->data_len = mbuf->pkt_len;
 	mbuf->port = rxq->port_id;
-	mbuf->ol_flags = PKT_RX_LRO;
+	mbuf->ol_flags = RTE_MBUF_F_RX_LRO;
 
 	bnxt_tpa_get_metadata(rxq->bp, tpa_info, tpa_start, tpa_start1);
 
 	if (likely(tpa_info->hash_valid)) {
 		mbuf->hash.rss = tpa_info->rss_hash;
-		mbuf->ol_flags |= PKT_RX_RSS_HASH;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	} else if (tpa_info->cfa_code_valid) {
 		mbuf->hash.fdir.id = tpa_info->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 	}
 
 	if (tpa_info->vlan_valid && BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 		mbuf->vlan_tci = tpa_info->vlan;
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 
 	if (likely(tpa_info->l4_csum_valid))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* recycle next mbuf */
 	data_cons = RING_NEXT(data_cons);
@@ -576,34 +576,34 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 
 		if (BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 			if (i & RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN)
-				pt[i] |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+				pt[i] |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		if (i & (RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC << 3)) {
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 			} else {
 				if (i & RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-				pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 			if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-				pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 	}
 
@@ -616,30 +616,30 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else {
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_IP_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 			if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_L4_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		}
 	}
 }
@@ -677,13 +677,13 @@ bnxt_set_ol_flags(struct bnxt_rx_ring_info *rxr, struct rx_pkt_cmpl *rxcmp,
 
 	if (flags_type & RX_PKT_CMPL_FLAGS_RSS_VALID) {
 		mbuf->hash.rss = rte_le_to_cpu_32(rxcmp->rss_hash);
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
 		     RX_PKT_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP))
-		ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+		ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 
 	mbuf->ol_flags = ol_flags;
@@ -807,7 +807,7 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 		mbuf->hash.fdir.hi = mark_id;
 		*bnxt_cfa_code_dynfield(mbuf) = cfa_code & 0xffffffffull;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		return mark_id;
 	}
 
@@ -854,7 +854,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 	}
 
 	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 59adb7242c..a84f016609 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -212,7 +212,7 @@ static inline void bnxt_rx_vlan_v2(struct rte_mbuf *mbuf,
 {
 	if (RX_CMP_VLAN_VALID(rxcmp)) {
 		mbuf->vlan_tci = RX_CMP_METADATA0_VID(rxcmp1);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 }
 
@@ -276,47 +276,47 @@ static inline void bnxt_parse_csum_v2(struct rte_mbuf *mbuf,
 			t_pkt = 1;
 
 		if (unlikely(RX_CMP_V2_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_L4_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (unlikely(RX_CMP_V2_L3_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_IP_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	} else {
 		hdr_cnt = RX_CMP_V2_L4_CS_OK(flags2);
 		if (hdr_cnt > 1)
 			t_pkt = 1;
 
 		if (RX_CMP_V2_L4_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (RX_CMP_V2_L4_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (RX_CMP_V2_L3_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (RX_CMP_V2_L3_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	if (t_pkt) {
 		if (unlikely(RX_CMP_V2_OT_L4_CS_ERR(error_v2) ||
 					RX_CMP_V2_T_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 		if (unlikely(RX_CMP_V2_T_IP_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	}
 }
 
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 5d3cdfa8f2..964b5772b0 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -108,12 +108,12 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 static bool
 bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
-	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
-				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
-				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
-				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ) ||
+	if (tx_pkt->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_TCP_CKSUM |
+				RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM |
+				RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+				RTE_MBUF_F_TX_TUNNEL_GRE | RTE_MBUF_F_TX_TUNNEL_VXLAN |
+				RTE_MBUF_F_TX_TUNNEL_GENEVE | RTE_MBUF_F_TX_IEEE1588_TMST |
+				RTE_MBUF_F_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -200,13 +200,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
+		} else if (tx_pkt->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
@@ -236,7 +236,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		else
 			txbd1->cfa_action = txq->bp->tx_cfa_action;
 
-		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			uint16_t hdr_size;
 
 			/* TSO */
@@ -244,7 +244,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					 TX_BD_LONG_LFLAGS_T_IPID;
 			hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len;
-			hdr_size += (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				    tx_pkt->outer_l2_len +
 				    tx_pkt->outer_l3_len : 0;
 			/* The hdr_size is multiple of 16bit units not 8bit.
@@ -299,24 +299,24 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			   PKT_TX_TCP_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
-			   PKT_TX_TCP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ==
+			   RTE_MBUF_F_TX_TCP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
-			   PKT_TX_UDP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) ==
+			   RTE_MBUF_F_TX_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
-			   PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
-			   PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IEEE1588_TMST) ==
-			   PKT_TX_IEEE1588_TMST) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) ==
+			   RTE_MBUF_F_TX_IEEE1588_TMST) {
 			/* PTP */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_STAMP;
 		}
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 6bfdc6d01a..e11343c082 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -60,25 +60,25 @@ int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_flush_tx_cmp(struct bnxt_cp_ring_info *cpr);
 
-#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
-					 PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_OIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_CKSUM		(RTE_MBUF_F_TX_IP_CKSUM |	\
+					 RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_TCP_UDP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM)
 
 
 #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..d516a1fcc3 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -112,7 +112,7 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	return !((mbuf->ol_flags & PKT_RX_VLAN) ? mbuf->vlan_tci : 0) &&
+	return !((mbuf->ol_flags & RTE_MBUF_F_RX_VLAN) ? mbuf->vlan_tci : 0) &&
 		(ethertype == ether_type_slow_be &&
 		(subtype == SLOW_SUBTYPE_MARKER || subtype == SLOW_SUBTYPE_LACP));
 }
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14..784a979f44 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -47,15 +47,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8c19..a1990d83b2 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -104,9 +104,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -190,7 +190,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -198,11 +198,11 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
@@ -305,7 +305,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -316,7 +316,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -443,10 +443,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -519,8 +519,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae07ae..0e8c8c71b5 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -171,12 +171,12 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -185,18 +185,18 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -211,7 +211,7 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -261,11 +261,11 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -277,15 +277,15 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -296,16 +296,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -321,27 +321,27 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -352,20 +352,20 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -414,7 +414,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			      const uint16_t flags)
 {
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
 					(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
@@ -434,7 +434,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrpt the actual tx tstamp registered
@@ -721,7 +721,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -730,20 +730,20 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 		uint8_t shift = is_udp_tun ? 32 : 0;
 
-		shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-		shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 		w1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -1282,26 +1282,26 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1486,40 +1486,40 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1707,11 +1707,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QnQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -1753,8 +1753,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestam. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916..08090fd4ac 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -47,15 +47,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index a3bf4e0b63..6714743f96 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -99,9 +99,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -186,7 +186,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -194,11 +194,11 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->cn9k.vtag0_tci;
 		}
 		if (rx->cn9k.vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->cn9k.vtag1_tci;
 		}
 	}
@@ -302,7 +302,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -313,7 +313,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -414,10 +414,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -490,8 +490,8 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						rxq->tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index ed65cd351f..584378d759 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -61,12 +61,12 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -75,18 +75,18 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -101,7 +101,7 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -151,11 +151,11 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -167,15 +167,15 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -186,16 +186,16 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -211,27 +211,27 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -242,20 +242,20 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -296,7 +296,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -309,7 +309,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrpt the actual tx tstamp registered
@@ -553,7 +553,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -562,15 +562,15 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
@@ -578,7 +578,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 		/* Update format for UDP tunneled packet */
 		w0->lso_format += is_udp_tun ? 2 : 6;
 
-		w0->lso_format += !!(ol_flags & PKT_TX_OUTER_IPV6) << 1;
+		w0->lso_format += !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 1;
 	}
 }
 
@@ -1060,26 +1060,26 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1264,40 +1264,40 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1485,11 +1485,11 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QnQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -1531,8 +1531,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestam. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 27920c84f2..262f726998 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -116,8 +116,8 @@
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
 #define CNXK_NIX_UDP_TUN_BITMASK                                               \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |                               \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
@@ -481,15 +481,15 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*cnxk_nix_timestamp_dynfield(mbuf, tstamp) =
 			rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 				*cnxk_nix_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-					  PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+					  RTE_MBUF_F_RX_IEEE1588_TMST |
 					  tstamp->rx_tstamp_dynflag;
 		}
 	}
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 0152ad906a..d3e78fa58b 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -242,9 +242,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -252,46 +252,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 3299d6252e..20aa84b653 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -539,7 +539,7 @@ static inline unsigned int flits_to_desc(unsigned int n)
  */
 static inline int is_eth_imm(const struct rte_mbuf *m)
 {
-	unsigned int hdrlen = (m->ol_flags & PKT_TX_TCP_SEG) ?
+	unsigned int hdrlen = (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ?
 			      sizeof(struct cpl_tx_pkt_lso_core) : 0;
 
 	hdrlen += sizeof(struct cpl_tx_pkt);
@@ -749,12 +749,12 @@ static u64 hwcsum(enum chip_type chip, const struct rte_mbuf *m)
 {
 	int csum_type;
 
-	if (m->ol_flags & PKT_TX_IP_CKSUM) {
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+	if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			csum_type = TX_CSUM_TCPIP;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			csum_type = TX_CSUM_UDPIP;
 			break;
 		default:
@@ -1029,7 +1029,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	/* fill the cpl message, same as in t4_eth_xmit, this should be kept
 	 * similar to t4_eth_xmit
 	 */
-	if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		cntrl = hwcsum(adap->params.chip, mbuf) |
 			       F_TXPKT_IPCSUM_DIS;
 		txq->stats.tx_cso++;
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1129,7 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		return 0;
 	}
 
-	if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
+	if ((!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) &&
 	    (unlikely(m->pkt_len > max_pkt_len)))
 		goto out_free;
 
@@ -1140,7 +1140,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	/* align the end of coalesce WR to a 512 byte boundary */
 	txq->q.coalesce.max = (8 - (txq->q.pidx & 7)) * 8;
 
-	if (!((m->ol_flags & PKT_TX_TCP_SEG) ||
+	if (!((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ||
 			m->pkt_len > RTE_ETHER_MAX_LEN)) {
 		if (should_tx_packet_coalesce(txq, mbuf, &cflits, adap)) {
 			if (unlikely(map_mbuf(mbuf, addr) < 0)) {
@@ -1203,7 +1203,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	len += sizeof(*cpl);
 
 	/* Coalescing skipped and we send through normal path */
-	if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+	if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		wr->op_immdlen = htonl(V_FW_WR_OP(is_pf4(adap) ?
 						  FW_ETH_TX_PKT_WR :
 						  FW_ETH_TX_PKT_VM_WR) |
@@ -1212,7 +1212,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			cpl = (void *)(wr + 1);
 		else
 			cpl = (void *)(vmwr + 1);
-		if (m->ol_flags & PKT_TX_IP_CKSUM) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			cntrl = hwcsum(adap->params.chip, m) |
 				F_TXPKT_IPCSUM_DIS;
 			txq->stats.tx_cso++;
@@ -1222,7 +1222,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			lso = (void *)(wr + 1);
 		else
 			lso = (void *)(vmwr + 1);
-		v6 = (m->ol_flags & PKT_TX_IPV6) != 0;
+		v6 = (m->ol_flags & RTE_MBUF_F_TX_IPV6) != 0;
 		l3hdr_len = m->l3_len;
 		l4hdr_len = m->l4_len;
 		eth_xtra_len = m->l2_len - RTE_ETHER_HDR_LEN;
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
@@ -1528,27 +1528,27 @@ static inline void cxgbe_fill_mbuf_info(struct adapter *adap,
 
 	if (cpl->vlan_ex)
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER_VLAN,
-				    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+				    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	else
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER, 0);
 
 	if (cpl->l2info & htonl(F_RXF_IP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV4,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_IP6))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV6,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 
 	if (cpl->l2info & htonl(F_RXF_TCP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_TCP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_UDP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_UDP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 }
 
 /**
@@ -1639,7 +1639,7 @@ static int process_responses(struct sge_rspq *q, int budget,
 
 				if (!rss_hdr->filter_tid &&
 				    rss_hdr->hash_type) {
-					pkt->ol_flags |= PKT_RX_RSS_HASH;
+					pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					pkt->hash.rss =
 						ntohl(rss_hdr->hash_val);
 				}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c2..98edc53359 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -80,10 +80,9 @@
 	ETH_RSS_TCP | \
 	ETH_RSS_SCTP)
 
-#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
-		PKT_TX_IP_CKSUM |                \
-		PKT_TX_TCP_CKSUM |               \
-		PKT_TX_UDP_CKSUM)
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |                \
+		RTE_MBUF_F_TX_TCP_CKSUM |               \
+		RTE_MBUF_F_TX_UDP_CKSUM)
 
 /* DPAA Frame descriptor macros */
 
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 423de40e95..ffac6ce3e2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -125,8 +125,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
 
-	m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD |
-		PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (prs) {
 	case DPAA_PKT_TYPE_IPV4:
@@ -204,13 +204,13 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 		break;
 	case DPAA_PKT_TYPE_IPV4_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_IPV4_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV4_UDP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_UDP_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_NONE:
 		m->packet_type = 0;
@@ -229,7 +229,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	/* Check if Vlan is present */
 	if (prs & DPAA_PARSE_VLAN_MASK)
-		m->ol_flags |= PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN;
 	/* Packet received without stripping the vlan */
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f491f4d10a..267090c59b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -114,7 +114,7 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 		m->packet_type = dpaa2_dev_rx_parse_slow(m, annotation);
 	}
 	m->hash.rss = fd->simple.flc_hi;
-	m->ol_flags |= PKT_RX_RSS_HASH;
+	m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (dpaa2_enable_ts[m->port]) {
 		*dpaa2_timestamp_dynfield(m) = annotation->word2;
@@ -141,20 +141,20 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (BIT_ISSET_AT_POS(annotation->word1, DPAA2_ETH_FAS_PTP))
-		mbuf->ol_flags |= PKT_RX_IEEE1588_PTP;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 #endif
 
 	if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_1_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 		pkt_type |= RTE_PTYPE_L2_ETHER_VLAN;
 	} else if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_N_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_QINQ;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_QINQ;
 		pkt_type |= RTE_PTYPE_L2_ETHER_QINQ;
 	}
 
@@ -189,9 +189,9 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 	}
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (BIT_ISSET_AT_POS(annotation->word4, L3_IP_1_FIRST_FRAGMENT |
 	    L3_IP_1_MORE_FRAGMENT |
@@ -232,9 +232,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 			   annotation->word4);
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
@@ -1228,9 +1228,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN) ||
-						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_MBUF_F_TX_VLAN) ||
+						     (eth_data->dev_conf.txmode.offloads
+						      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1271,9 +1271,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
-				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+			if (unlikely(((*bufs)->ol_flags & RTE_MBUF_F_TX_VLAN) ||
+				     (eth_data->dev_conf.txmode.offloads
+				      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN)) {
+						& RTE_MBUF_F_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 0105e2d384..dbfdaecd4c 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -50,15 +50,14 @@
 
 #define E1000_RXDCTL_GRAN	0x01000000 /* RXDCTL Granularity */
 
-#define E1000_TX_OFFLOAD_MASK ( \
-		PKT_TX_IPV6 |           \
-		PKT_TX_IPV4 |           \
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN)
+#define E1000_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV6 |           \
+		RTE_MBUF_F_TX_IPV4 |           \
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
 
 /* PCI offset for querying configuration status register */
 #define PCI_CFG_STATUS_REG                 0x06
@@ -234,7 +233,7 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	 * When doing checksum or TCP segmentation with IPv6 headers,
 	 * IPCSE field should be set t0 0.
 	 */
-	if (flags & PKT_TX_IP_CKSUM) {
+	if (flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
@@ -247,13 +246,13 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	ctx.upper_setup.tcp_fields.tucss = (uint8_t)ipcse;
 	ctx.upper_setup.tcp_fields.tucse = 0;
 
-	switch (flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_udp_hdr, dgram_cksum));
 		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
@@ -356,8 +355,8 @@ tx_desc_cksum_flags_to_upper(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
 	uint32_t tmp;
 
-	tmp = l4_olinfo[(ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	return tmp;
 }
 
@@ -410,7 +409,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = tx_pkt->ol_flags;
 
 		/* If hardware offload required */
-		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
+		tx_ol_req = (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK));
 		if (tx_ol_req) {
 			hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
 			hdrlen.f.l2_len = tx_pkt->l2_len;
@@ -506,7 +505,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN) {
+		if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
@@ -656,7 +655,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 	return pkt_flags;
 }
@@ -667,9 +666,9 @@ rx_desc_error_to_pkt_flags(uint32_t rx_error)
 	uint64_t pkt_flags = 0;
 
 	if (rx_error & E1000_RXD_ERR_IPE)
-		pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	if (rx_error & E1000_RXD_ERR_TCPE)
-		pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	return pkt_flags;
 }
 
@@ -811,7 +810,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->ol_flags = rxm->ol_flags |
 				rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
@@ -1037,7 +1036,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->ol_flags = first_seg->ol_flags |
 					rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index c630894052..f2e8c7e5b9 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -44,24 +44,23 @@
 #include "e1000_ethdev.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IGB_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IGB_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IGB_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGB_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
+#define IGB_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
 		IGB_TX_IEEE1588_TMST)
 
 #define IGB_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -224,12 +223,12 @@ struct igb_tx_queue {
 static inline uint64_t
 check_tso_para(uint64_t ol_req, union igb_tx_offload ol_para)
 {
-	if (!(ol_req & PKT_TX_TCP_SEG))
+	if (!(ol_req & RTE_MBUF_F_TX_TCP_SEG))
 		return ol_req;
 	if ((ol_para.tso_segsz > IGB_TSO_MAX_MSS) || (ol_para.l2_len +
 			ol_para.l3_len + ol_para.l4_len > IGB_TSO_MAX_HDRLEN)) {
-		ol_req &= ~PKT_TX_TCP_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req &= ~RTE_MBUF_F_TX_TCP_SEG;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -260,13 +259,13 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4 |
 				E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
@@ -279,26 +278,26 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << E1000_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << E1000_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -357,9 +356,9 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
-	tmp |= l4_olinfo[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
+	tmp |= l4_olinfo[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return tmp;
 }
 
@@ -369,8 +368,8 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
-	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
+	cmdtype |= tso_cmd[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
 
@@ -526,11 +525,11 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		cmd_type_len = txq->txd_type |
 			E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_DEXT;
-		if (tx_ol_req & PKT_TX_TCP_SEG)
+		if (tx_ol_req & RTE_MBUF_F_TX_TCP_SEG)
 			pkt_len -= (tx_pkt->l2_len + tx_pkt->l3_len + tx_pkt->l4_len);
 		olinfo_status = (pkt_len << E1000_ADVTXD_PAYLEN_SHIFT);
 #if defined(RTE_LIBRTE_IEEE1588)
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= E1000_ADVTXD_MAC_TSTAMP;
 #endif
 		if (tx_ol_req) {
@@ -628,7 +627,7 @@ eth_igb_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 
 		/* Check some limitations for TSO in hardware */
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			if ((m->tso_segsz > IGB_TSO_MAX_MSS) ||
 					(m->l2_len + m->l3_len + m->l4_len >
 					IGB_TSO_MAX_HDRLEN)) {
@@ -743,11 +742,11 @@ igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
 static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(struct igb_rx_queue *rxq, uint32_t hl_tp_rs)
 {
-	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : PKT_RX_RSS_HASH;
+	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : RTE_MBUF_F_RX_RSS_HASH;
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	static uint32_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -773,11 +772,11 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (rx_status & E1000_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -791,10 +790,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	 */
 
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	return error_to_pkt_flags_map[(rx_status >>
 		E1000_RXD_ERR_CKSUM_BIT) & E1000_RXD_ERR_CKSUM_MSK];
@@ -936,7 +935,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
@@ -1176,7 +1175,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68..73f8f94766 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -120,9 +120,9 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 			DEV_TX_OFFLOAD_UDP_CKSUM |\
 			DEV_TX_OFFLOAD_IPV4_CKSUM |\
 			DEV_TX_OFFLOAD_TCP_TSO)
-#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
-		       PKT_TX_IP_CKSUM |\
-		       PKT_TX_TCP_SEG)
+#define MBUF_OFFLOADS (RTE_MBUF_F_TX_L4_MASK |\
+		       RTE_MBUF_F_TX_IP_CKSUM |\
+		       RTE_MBUF_F_TX_TCP_SEG)
 
 /** Vendor ID used by Amazon devices */
 #define PCI_VENDOR_ID_AMAZON 0x1D0F
@@ -130,15 +130,14 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define PCI_DEVICE_ID_ENA_VF		0xEC20
 #define PCI_DEVICE_ID_ENA_VF_RSERV0	0xEC21
 
-#define	ENA_TX_OFFLOAD_MASK	(\
-	PKT_TX_L4_MASK |         \
-	PKT_TX_IPV6 |            \
-	PKT_TX_IPV4 |            \
-	PKT_TX_IP_CKSUM |        \
-	PKT_TX_TCP_SEG)
+#define	ENA_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_L4_MASK |         \
+	RTE_MBUF_F_TX_IPV6 |            \
+	RTE_MBUF_F_TX_IPV4 |            \
+	RTE_MBUF_F_TX_IP_CKSUM |        \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define	ENA_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
 
 static const struct rte_pci_id pci_id_ena_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF) },
@@ -274,24 +273,24 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) {
 		packet_type |= RTE_PTYPE_L3_IPV4;
 		if (unlikely(ena_rx_ctx->l3_csum_err))
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV6) {
 		packet_type |= RTE_PTYPE_L3_IPV6;
 	}
 
 	if (!ena_rx_ctx->l4_csum_checked || ena_rx_ctx->frag)
-		ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	else
 		if (unlikely(ena_rx_ctx->l4_csum_err))
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (fill_hash &&
 	    likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = ena_rx_ctx->hash;
 	}
 
@@ -309,7 +308,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if ((mbuf->ol_flags & MBUF_OFFLOADS) &&
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
-		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
@@ -317,11 +316,11 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L3 checksum is needed */
-		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
-		if (mbuf->ol_flags & PKT_TX_IPV6) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6;
 		} else {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4;
@@ -334,12 +333,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L4 checksum is needed */
-		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
+		if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
-		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_UDP_CKSUM) &&
+		} else if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_UDP_CKSUM) &&
 				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
@@ -2151,7 +2150,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash);
 
 		if (unlikely(mbuf->ol_flags &
-				(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
+				(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD))) {
 			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
 		}
@@ -2193,7 +2192,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (!(ol_flags & PKT_TX_IPV4))
+		if (!(ol_flags & RTE_MBUF_F_TX_IPV4))
 			continue;
 
 		/* If there was not L2 header length specified, assume it is
@@ -2217,8 +2216,8 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -2237,7 +2236,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 
 		ret = rte_net_intel_cksum_flags_prepare(m,
-			ol_flags & ~PKT_TX_TCP_SEG);
+			ol_flags & ~RTE_MBUF_F_TX_TCP_SEG);
 		if (ret != 0) {
 			rte_errno = -ret;
 			return i;
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 412322523d..ea64c9f682 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -174,80 +174,80 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
 static inline void enetc_slow_parsing(struct rte_mbuf *m,
 				     uint64_t parse_results)
 {
-	m->ol_flags &= ~(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+	m->ol_flags &= ~(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	switch (parse_results) {
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	/* More switch cases can be added */
 	default:
 		m->packet_type = RTE_PTYPE_UNKNOWN;
-		m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN |
-			       PKT_RX_L4_CKSUM_UNKNOWN;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN |
+			       RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 }
 
@@ -256,7 +256,7 @@ static inline void __rte_hot
 enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
 {
 	ENETC_PMD_DP_DEBUG("parse summary = 0x%x   ", parse_results);
-	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (parse_results) {
 	case ENETC_PKT_TYPE_ETHER:
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b312e216ef 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -250,7 +250,7 @@ void enic_init_vnic_resources(struct enic *enic)
 			error_interrupt_offset);
 		/* Compute unsupported ol flags for enic_prep_pkts() */
 		enic->wq[index].tx_offload_notsup_mask =
-			PKT_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
+			RTE_MBUF_F_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
 
 		cq_idx = enic_cq_wq(enic, index);
 		vnic_cq_init(&enic->cq[cq_idx],
@@ -1755,10 +1755,10 @@ enic_enable_overlay_offload(struct enic *enic)
 		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
 		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 	enic->overlay_offload = true;
 
 	if (enic->vxlan && enic->geneve)
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d0..e85f9f23fb 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -216,12 +216,12 @@ int enic_get_vnic_config(struct enic *enic)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_IP_CKSUM |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG;
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG;
 
 	return 0;
 }
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 3899907d6d..c44715bfd0 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -424,7 +424,7 @@ uint16_t enic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i != nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (unlikely(m->pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
 				rte_errno = EINVAL;
 				return i;
@@ -489,7 +489,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	wq_desc_avail = vnic_wq_desc_avail(wq);
 	head_idx = wq->head_idx;
 	desc_count = wq->ring.desc_count;
-	ol_flags_mask = PKT_TX_VLAN | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+	ol_flags_mask = RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK;
 	tx_oversized = &enic->soft_stats.tx_oversized;
 
 	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
@@ -500,7 +500,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		data_len = tx_pkt->data_len;
 		ol_flags = tx_pkt->ol_flags;
 		nb_segs = tx_pkt->nb_segs;
-		tso = ol_flags & PKT_TX_TCP_SEG;
+		tso = ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 
 		/* drop packet if it's too big to send */
 		if (unlikely(!tso && pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
@@ -517,7 +517,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		mss = 0;
 		vlan_id = tx_pkt->vlan_tci;
-		vlan_tag_insert = !!(ol_flags & PKT_TX_VLAN);
+		vlan_tag_insert = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		bus_addr = (dma_addr_t)
 			   (tx_pkt->buf_iova + tx_pkt->data_off);
 
@@ -543,20 +543,20 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
 			mss = tx_pkt->tso_segsz;
 			/* For tunnel, need the size of outer+inner headers */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				header_len += tx_pkt->outer_l2_len +
 					tx_pkt->outer_l3_len;
 			}
 		}
 
 		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
-			if (ol_flags & PKT_TX_IP_CKSUM)
+			if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				mss |= ENIC_CALC_IP_CKSUM;
 
 			/* Nic uses just 1 bit for UDP and TCP */
-			switch (ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
-			case PKT_TX_UDP_CKSUM:
+			switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				mss |= ENIC_CALC_TCP_UDP_CKSUM;
 				break;
 			}
@@ -634,7 +634,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		desc->header_length_flags &=
 			((1 << WQ_ENET_FLAGS_EOP_SHIFT) |
 			 (1 << WQ_ENET_FLAGS_CQ_ENTRY_SHIFT));
-		if (p->ol_flags & PKT_TX_VLAN) {
+		if (p->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			desc->header_length_flags |=
 				1 << WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT;
 		}
@@ -643,9 +643,9 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		 * is 0, so no need to set offload_mode.
 		 */
 		mss = 0;
-		if (p->ol_flags & PKT_TX_IP_CKSUM)
+		if (p->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			mss |= ENIC_CALC_IP_CKSUM << WQ_ENET_MSS_SHIFT;
-		if (p->ol_flags & PKT_TX_L4_MASK)
+		if (p->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 			mss |= ENIC_CALC_TCP_UDP_CKSUM << WQ_ENET_MSS_SHIFT;
 		desc->mss_loopback = mss;
 
diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h
index d8668d1898..9d6d3476b0 100644
--- a/drivers/net/enic/enic_rxtx_common.h
+++ b/drivers/net/enic/enic_rxtx_common.h
@@ -209,11 +209,11 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 
 	/* VLAN STRIPPED flag. The L2 packet type updated here also */
 	if (bwflags & CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
 	} else {
 		if (vlan_tci != 0) {
-			pkt_flags |= PKT_RX_VLAN;
+			pkt_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
 		} else {
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
@@ -227,16 +227,16 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 		clsf_cqd = (struct cq_enet_rq_clsf_desc *)cqd;
 		filter_id = clsf_cqd->filter_id;
 		if (filter_id) {
-			pkt_flags |= PKT_RX_FDIR;
+			pkt_flags |= RTE_MBUF_F_RX_FDIR;
 			if (filter_id != ENIC_MAGIC_FILTER_ID) {
 				/* filter_id = mark id + 1, so subtract 1 */
 				mbuf->hash.fdir.hi = filter_id - 1;
-				pkt_flags |= PKT_RX_FDIR_ID;
+				pkt_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			}
 		}
 	} else if (enic_cq_rx_desc_rss_type(cqrd)) {
 		/* RSS flag */
-		pkt_flags |= PKT_RX_RSS_HASH;
+		pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = enic_cq_rx_desc_rss_hash(cqrd);
 	}
 
@@ -254,17 +254,17 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 			 */
 			if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
 				if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			}
 
 			if (l4_flags == RTE_PTYPE_L4_UDP ||
 			    l4_flags == RTE_PTYPE_L4_TCP) {
 				if (enic_cq_rx_desc_tcp_udp_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 	}
diff --git a/drivers/net/enic/enic_rxtx_vec_avx2.c b/drivers/net/enic/enic_rxtx_vec_avx2.c
index 1848f52717..600efff270 100644
--- a/drivers/net/enic/enic_rxtx_vec_avx2.c
+++ b/drivers/net/enic/enic_rxtx_vec_avx2.c
@@ -167,21 +167,21 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10);
-	/* PKT_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
+	/* RTE_MBUF_F_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
 	const __m256i rss_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0, /* rss_types = 0 */
 			/* first 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0 /* rss_types = 0 */);
 	/*
 	 * VLAN offload flags.
@@ -191,8 +191,8 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i vlan_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN);
 	/* Use the same shuffle index as vlan_shuffle */
 	const __m256i vlan_ptype_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
@@ -211,39 +211,39 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const __m256i csum_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
 			/* 1111 ip4+ip4_ok+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 1110 ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
 			0, /* 1001 */
 			0, /* 1000 */
 			/* 0111 !ip4_ok+ip4+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 0110 !ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
 			0, /* 0001 */
 			0, /* 0000 */
 			/* first 128 bits */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0,
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0);
 	/*
 	 * Non-fragment PTYPEs.
@@ -471,7 +471,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			break;
 
 		/*
-		 * Compute PKT_RX_RSS_HASH.
+		 * Compute RTE_MBUF_F_RX_RSS_HASH.
 		 * Use 2 shifts and 1 shuffle for 8 desc: 0.375 inst/desc
 		 * RSS types in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
@@ -479,7 +479,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		__m256i rss_types =
 			_mm256_srli_epi32(_mm256_slli_epi32(flags0_7, 10), 28);
 		/*
-		 * RSS flags (PKT_RX_RSS_HASH) are in
+		 * RSS flags (RTE_MBUF_F_RX_RSS_HASH) are in
 		 * byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
 		 */
@@ -557,7 +557,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		vlan0_7 = _mm256_sub_epi32(zero4, vlan0_7);
 
 		/*
-		 * Compute PKT_RX_VLAN and PKT_RX_VLAN_STRIPPED.
+		 * Compute RTE_MBUF_F_RX_VLAN and RTE_MBUF_F_RX_VLAN_STRIPPED.
 		 * Use 3 shifts, 1 or,  1 shuffle for 8 desc: 0.625 inst/desc
 		 * VLAN offload flags in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 496e72a003..b232d09104 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -37,16 +37,15 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 }
 #endif
 
-#define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |            \
-		PKT_TX_IPV4 |            \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_TCP_SEG)
+#define FM10K_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |            \
+		RTE_MBUF_F_TX_IPV4 |            \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define FM10K_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
 
 /* @note: When this function is changed, make corresponding change to
  * fm10k_dev_supported_ptypes_get()
@@ -78,21 +77,21 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
 						>> FM10K_RXD_PKTTYPE_SHIFT];
 
 	if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) ==
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)))
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) ==
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)))
-		m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 }
 
 uint16_t
@@ -131,10 +130,10 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, always RTE_MBUF_F_RX_VLAN flag is set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -292,10 +291,10 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, always RTE_MBUF_F_RX_VLAN flag is set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		first_seg->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		first_seg->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		first_seg->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -605,11 +604,11 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	/* set checksum flags on first descriptor of packet. SCTP checksum
 	 * offload is not supported, but we do not explicitly check for this
 	 * case in favor of greatly simplified processing. */
-	if (mb->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
+	if (mb->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN)
+	if (mb->ol_flags & RTE_MBUF_F_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
@@ -620,9 +619,9 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	q->hw_ring[q->next_free].buflen =
 			rte_cpu_to_le_16(rte_pktmbuf_data_len(mb));
 
-	if (mb->ol_flags & PKT_TX_TCP_SEG) {
+	if (mb->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-		hdrlen += (mb->ol_flags & PKT_TX_TUNNEL_MASK) ?
+		hdrlen += (mb->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  mb->outer_l2_len + mb->outer_l3_len : 0;
 		if (q->hw_ring[q->next_free].flags & FM10K_TXD_FLAG_FTAG)
 			hdrlen += sizeof(struct fm10k_ftag);
@@ -699,7 +698,7 @@ fm10k_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
-		if ((m->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				(m->tso_segsz < FM10K_TSO_MINMSS)) {
 			rte_errno = EINVAL;
 			return i;
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2d..7ecba9fef2 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -38,7 +38,7 @@ fm10k_reset_tx_queue(struct fm10k_tx_queue *txq);
 #define RXEFLAG_SHIFT     (13)
 /* IPE/L4E flag shift */
 #define L3L4EFLAG_SHIFT     (14)
-/* shift PKT_RX_L4_CKSUM_GOOD into one byte by 1 bit */
+/* shift RTE_MBUF_F_RX_L4_CKSUM_GOOD into one byte by 1 bit */
 #define CKSUM_SHIFT     (1)
 
 static inline void
@@ -52,10 +52,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	const __m128i pkttype_msk = _mm_set_epi16(
 			0x0000, 0x0000, 0x0000, 0x0000,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	/* mask everything except rss type */
 	const __m128i rsstype_msk = _mm_set_epi16(
@@ -75,10 +75,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i l3l4cksum_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
 
 	const __m128i rxe_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
@@ -87,9 +87,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	/* map rss type to rss hash flag */
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* Calculate RSS_hash and Vlan fields */
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4c..311b22ccd1 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -802,7 +802,7 @@ static inline uint64_t hinic_rx_rss_hash(uint32_t offload_type,
 	rss_type = HINIC_GET_RSS_TYPES(offload_type);
 	if (likely(rss_type != 0)) {
 		*rss_hash = cqe_hass_val;
-		return PKT_RX_RSS_HASH;
+		return RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	return 0;
@@ -815,33 +815,33 @@ static inline uint64_t hinic_rx_csum(uint32_t status, struct hinic_rxq *rxq)
 	struct hinic_nic_dev *nic_dev = rxq->nic_dev;
 
 	if (unlikely(!(nic_dev->rx_csum_en & HINIC_RX_CSUM_OFFLOAD_EN)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	/* most case checksum is ok */
 	checksum_err = HINIC_GET_RX_CSUM_ERR(status);
 	if (likely(checksum_err == 0))
-		return (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	/* If BYPASS bit set, all other status indications should be ignored */
 	if (unlikely(HINIC_CSUM_ERR_BYPASSED(checksum_err)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	flags = 0;
 
 	/* IP checksum error */
 	if (HINIC_CSUM_ERR_IP(checksum_err))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* L4 checksum error */
 	if (HINIC_CSUM_ERR_L4(checksum_err))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(HINIC_CSUM_ERR_OTHER(checksum_err)))
-		flags = PKT_RX_L4_CKSUM_NONE;
+		flags = RTE_MBUF_F_RX_L4_CKSUM_NONE;
 
 	rxq->rxq_stats.errors++;
 
@@ -861,7 +861,7 @@ static inline uint64_t hinic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
 
 	*vlan_tci = vlan_tag;
 
-	return PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	return RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 }
 
 static inline u32 hinic_rx_alloc_mbuf_bulk(struct hinic_rxq *rxq,
@@ -1061,7 +1061,7 @@ u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
 		/* lro offload */
 		lro_num = HINIC_GET_RX_NUM_LRO(cqe.status);
 		if (unlikely(lro_num != 0)) {
-			rxm->ol_flags |= PKT_RX_LRO;
+			rxm->ol_flags |= RTE_MBUF_F_RX_LRO;
 			rxm->tso_segsz = pkt_len / lro_num;
 		}
 
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index e14937139d..2688817f37 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN)) {
+	if (unlikely(ol_flags & RTE_MBUF_F_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
@@ -602,7 +602,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	if (unlikely(!(ol_flags & HINIC_TX_CKSUM_OFFLOAD_MASK)))
 		return;
 
-	if ((ol_flags & PKT_TX_TCP_SEG))
+	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		/* set tso info for task and qsf */
 		hinic_set_tso_info(task, queue_info, mbuf, tx_off_info);
 	else /* just support l4 checksum offload */
@@ -718,7 +718,7 @@ hinic_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len =
@@ -738,7 +738,7 @@ hinic_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		psd_hdr.len = 0;
 	else
 		psd_hdr.len = ipv6_hdr->payload_len;
@@ -754,10 +754,10 @@ static inline void hinic_get_outer_cs_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len + m->l4_len;
 }
@@ -767,10 +767,10 @@ static inline void hinic_get_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if (((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) ||
-	    ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM))
+	if (((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) ||
+	    ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_SCTP_CKSUM))
 		off_info->payload_offset = m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->l2_len + m->l3_len +
 					   m->l4_len;
 }
@@ -845,11 +845,11 @@ static inline uint8_t hinic_analyze_l3_type(struct rte_mbuf *mbuf)
 	uint8_t l3_type;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4)
-		l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
+		l3_type = (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ?
 			  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
 			  IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = IPV6_PKT;
 	else
 		l3_type = UNKNOWN_L3TYPE;
@@ -866,11 +866,11 @@ static inline void hinic_calculate_tcp_checksum(struct rte_mbuf *mbuf,
 	struct rte_tcp_hdr *tcp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
@@ -898,11 +898,11 @@ static inline void hinic_calculate_udp_checksum(struct rte_mbuf *mbuf,
 	struct rte_udp_hdr *udp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
@@ -938,21 +938,21 @@ static inline void hinic_calculate_checksum(struct rte_mbuf *mbuf,
 {
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hinic_calculate_udp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hinic_calculate_tcp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		hinic_calculate_sctp_checksum(off_info);
 		break;
 
 	default:
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			hinic_calculate_tcp_checksum(mbuf, off_info,
 						     inner_l3_offset);
 		break;
@@ -970,8 +970,8 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return 0;
 
 	/* Support only vxlan offload */
-	if (unlikely((ol_flags & PKT_TX_TUNNEL_MASK) &&
-	    !(ol_flags & PKT_TX_TUNNEL_VXLAN)))
+	if (unlikely((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		     !(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)))
 		return -ENOTSUP;
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
@@ -979,7 +979,7 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return -EINVAL;
 #endif
 
-	if (ol_flags & PKT_TX_TUNNEL_VXLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) {
 		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
 
 		/* inner_l4_tcp_udp csum should be set to calculate outer
@@ -987,9 +987,9 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		 */
 		off_info->inner_l4_tcp_udp = 1;
 
-		if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		    (ol_flags & PKT_TX_OUTER_IPV6) ||
-		    (ol_flags & PKT_TX_TCP_SEG)) {
+		if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		    (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) ||
+		    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			inner_l3_offset = m->l2_len + m->outer_l2_len +
 					  m->outer_l3_len;
 			off_info->outer_l2_len = m->outer_l2_len;
@@ -1057,7 +1057,7 @@ static inline bool hinic_get_sge_txoff_info(struct rte_mbuf *mbuf_pkt,
 	sqe_info->cpy_mbuf_cnt = 0;
 
 	/* non tso mbuf */
-	if (likely(!(mbuf_pkt->ol_flags & PKT_TX_TCP_SEG))) {
+	if (likely(!(mbuf_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
 		if (unlikely(mbuf_pkt->pkt_len > MAX_SINGLE_SGE_SIZE)) {
 			/* non tso packet len must less than 64KB */
 			return false;
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
index d98abad8da..a3ec6299fb 100644
--- a/drivers/net/hinic/hinic_pmd_tx.h
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -13,13 +13,12 @@
 #define HINIC_GET_WQ_TAIL(txq)		\
 		((txq)->wq->queue_buf_vaddr + (txq)->wq->wq_buf_size)
 
-#define HINIC_TX_CKSUM_OFFLOAD_MASK (	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_TCP_CKSUM |	\
-		PKT_TX_UDP_CKSUM |      \
-		PKT_TX_SCTP_CKSUM |	\
-		PKT_TX_OUTER_IP_CKSUM |	\
-		PKT_TX_TCP_SEG)
+#define HINIC_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_CKSUM |	\
+		RTE_MBUF_F_TX_UDP_CKSUM |      \
+		RTE_MBUF_F_TX_SCTP_CKSUM |	\
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_SEG)
 
 enum sq_wqe_type {
 	SQ_NORMAL_WQE = 0,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a1..7c8ee12b21 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -623,7 +623,7 @@ struct hns3_hw {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, PMD need
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 59ba9e7454..45e22edd74 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2329,11 +2329,11 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->vlan_tci = 0;
 		return;
 	case HNS3_INNER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.vlan_tag);
 		return;
 	case HNS3_OUTER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
 	default:
@@ -2383,7 +2383,7 @@ hns3_rx_ptp_timestamp_handle(struct hns3_rx_queue *rxq, struct rte_mbuf *mbuf,
 	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(rxq->hns);
 	uint64_t timestamp = rte_le_to_cpu_64(rxd->timestamp);
 
-	mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 	if (hns3_timestamp_rx_dynflag > 0) {
 		*RTE_MBUF_DYNFIELD(mbuf, hns3_timestamp_dynfield_offset,
 			rte_mbuf_timestamp_t *) = timestamp;
@@ -2469,11 +2469,11 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->data_len = rxm->pkt_len;
 		rxm->port = rxq->port_id;
 		rxm->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		rxm->ol_flags |= PKT_RX_RSS_HASH;
+		rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			rxm->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			rxm->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			rxm->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 		rxm->nb_segs = 1;
 		rxm->next = NULL;
@@ -2488,7 +2488,7 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->packet_type = hns3_rx_calc_ptype(rxq, l234_info, ol_info);
 
 		if (rxm->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, rxm, l234_info, &rxd);
 
@@ -2687,17 +2687,17 @@ hns3_recv_scattered_pkts(void *rx_queue,
 
 		first_seg->port = rxq->port_id;
 		first_seg->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		first_seg->ol_flags = PKT_RX_RSS_HASH;
+		first_seg->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			first_seg->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 
 		gro_size = hns3_get_field(bd_base_info, HNS3_RXD_GRO_SIZE_M,
 					  HNS3_RXD_GRO_SIZE_S);
 		if (gro_size != 0) {
-			first_seg->ol_flags |= PKT_RX_LRO;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_LRO;
 			first_seg->tso_segsz = gro_size;
 		}
 
@@ -2712,7 +2712,7 @@ hns3_recv_scattered_pkts(void *rx_queue,
 						l234_info, ol_info);
 
 		if (first_seg->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, first_seg, l234_info, &rxd);
 
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 static inline bool
 hns3_pkt_is_tso(struct rte_mbuf *m)
 {
-	return (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG);
+	return (m->tso_segsz != 0 && m->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 }
 
 static void
@@ -3172,7 +3172,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	uint32_t paylen;
 
 	hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
-	hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			   rxm->outer_l2_len + rxm->outer_l3_len : 0;
 	paylen = rxm->pkt_len - hdr_len;
 	desc->tx.paylen_fd_dop_ol4cs |= rte_cpu_to_le_32(paylen);
@@ -3190,11 +3190,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
-				PKT_TX_QINQ)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (RTE_MBUF_F_TX_VLAN |
+				                  RTE_MBUF_F_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ)
+		if (ol_flags & RTE_MBUF_F_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3202,14 +3202,14 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ ||
-	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & RTE_MBUF_F_TX_QINQ ||
+	    ((ol_flags & RTE_MBUF_F_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_IEEE1588_TMST)
+	if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		desc->tx.tp_fe_sc_vld_ra_ri |=
 				rte_cpu_to_le_16(BIT(HNS3_TXD_TSYN_B));
 }
@@ -3331,14 +3331,14 @@ hns3_parse_outer_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 					HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_CSUM);
 		else
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 				HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_NO_CSUM);
-	} else if (ol_flags & PKT_TX_OUTER_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) {
 		tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M, HNS3_TXD_OL3T_S,
 					HNS3_OL3T_IPV6);
 	}
@@ -3358,10 +3358,10 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 	uint64_t ol_flags = m->ol_flags;
 	uint16_t inner_l2_len;
 
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN_GPE:
-	case PKT_TX_TUNNEL_GENEVE:
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* MAC in UDP tunnelling packet, include VxLAN and GENEVE */
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 				HNS3_TXD_TUNTYPE_S, HNS3_TUN_MAC_IN_UDP);
@@ -3380,7 +3380,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 
 		inner_l2_len = m->l2_len - RTE_ETHER_VXLAN_HLEN;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 					HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
 		/*
@@ -3429,7 +3429,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 	 * calculations, the length of the L2 header include the outer and
 	 * inner, will be filled during the parsing of tunnel packects.
 	 */
-	if (!(ol_flags & PKT_TX_TUNNEL_MASK)) {
+	if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		/*
 		 * For non tunnel type the tunnel type id is 0, so no need to
 		 * assign a value to it. Only the inner(normal) L2 header length
@@ -3445,7 +3445,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 		 * calculate the header length.
 		 */
 		if (unlikely(!(ol_flags &
-			(PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_UDP_CKSUM)) &&
+			(RTE_MBUF_F_TX_OUTER_IP_CKSUM | RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
 					m->outer_l2_len == 0)) {
 			struct rte_net_hdr_lens hdr_len;
 			(void)rte_net_get_ptype(m, &hdr_len,
@@ -3462,7 +3462,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 
 	desc->tx.ol_type_vlan_len_msec = rte_cpu_to_le_32(tmp_outer);
 	desc->tx.type_cs_vlan_tso_len = rte_cpu_to_le_32(tmp_inner);
-	tmp_ol4cs = ol_flags & PKT_TX_OUTER_UDP_CKSUM ?
+	tmp_ol4cs = ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM ?
 			BIT(HNS3_TXD_OL4CS_B) : 0;
 	desc->tx.paylen_fd_dop_ol4cs = rte_cpu_to_le_32(tmp_ol4cs);
 
@@ -3477,9 +3477,9 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint32_t tmp;
 
 	tmp = *type_cs_vlan_tso_len;
-	if (ol_flags & PKT_TX_IPV4)
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
 		l3_type = HNS3_L3T_IPV4;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = HNS3_L3T_IPV6;
 	else
 		l3_type = HNS3_L3T_NONE;
@@ -3491,7 +3491,7 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	tmp |= hns3_gen_field_val(HNS3_TXD_L3T_M, HNS3_TXD_L3T_S, l3_type);
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= BIT(HNS3_TXD_L3CS_B);
 	*type_cs_vlan_tso_len = tmp;
 }
@@ -3502,20 +3502,20 @@ hns3_parse_l4_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint64_t ol_flags = m->ol_flags;
 	uint32_t tmp;
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG)) {
-	case PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG:
-	case PKT_TX_TCP_CKSUM:
-	case PKT_TX_TCP_SEG:
+	switch (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_SEG:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_SCTP);
@@ -3572,7 +3572,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
 
 	/* ensure the first 8 frags is greater than mss + header */
 	hdr_len = tx_pkts->l2_len + tx_pkts->l3_len + tx_pkts->l4_len;
-	hdr_len += (tx_pkts->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (tx_pkts->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_pkts->outer_l2_len + tx_pkts->outer_l3_len : 0;
 	if (tot_len + m_last->data_len < tx_pkts->tso_segsz + hdr_len)
 		return true;
@@ -3602,15 +3602,15 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv4_hdr *ipv4_hdr;
 	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		ipv4_hdr->hdr_checksum = 0;
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3629,13 +3629,13 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv6_hdr *ipv6_hdr;
 	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3654,10 +3654,10 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	uint32_t paylen, hdr_len, l4_proto;
 	struct rte_udp_hdr *udp_hdr;
 
-	if (!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 		return;
 
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
 		if (hns3_outer_ipv4_cksum_prepared(m, ol_flags, &l4_proto))
 			return;
 	} else {
@@ -3666,7 +3666,7 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	}
 
 	/* driver should ensure the outer udp cksum is 0 for TUNNEL TSO */
-	if (l4_proto == IPPROTO_UDP && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (l4_proto == IPPROTO_UDP && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		hdr_len = m->l2_len + m->l3_len + m->l4_len;
 		hdr_len += m->outer_l2_len + m->outer_l3_len;
 		paylen = m->pkt_len - hdr_len;
@@ -3692,7 +3692,7 @@ hns3_check_tso_pkt_valid(struct rte_mbuf *m)
 		return -EINVAL;
 
 	hdr_len = m->l2_len + m->l3_len + m->l4_len;
-	hdr_len += (m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			m->outer_l2_len + m->outer_l3_len : 0;
 	if (hdr_len > HNS3_MAX_TSO_HDR_SIZE)
 		return -EINVAL;
@@ -3742,12 +3742,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ)
+	if (m->ol_flags & RTE_MBUF_F_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN)
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
@@ -3767,7 +3767,7 @@ hns3_udp_cksum_help(struct rte_mbuf *m)
 	uint16_t cksum = 0;
 	uint32_t l4_len;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 				struct rte_ipv4_hdr *, m->l2_len);
 		l4_len = rte_be_to_cpu_16(ipv4_hdr->total_length) - m->l3_len;
@@ -3798,8 +3798,8 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	uint16_t dst_port;
 
 	if (tx_queue->udp_cksum_mode == HNS3_SPECIAL_PORT_HW_CKSUM_MODE ||
-	    ol_flags & PKT_TX_TUNNEL_MASK ||
-	    (ol_flags & PKT_TX_L4_MASK) != PKT_TX_UDP_CKSUM)
+	    ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK ||
+	    (ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_UDP_CKSUM)
 		return true;
 	/*
 	 * A UDP packet with the same dst_port as VXLAN\VXLAN_GPE\GENEVE will
@@ -3816,7 +3816,7 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	case RTE_VXLAN_GPE_DEFAULT_PORT:
 	case RTE_GENEVE_DEFAULT_PORT:
 		udp_hdr->dgram_cksum = hns3_udp_cksum_help(m);
-		m->ol_flags = ol_flags & ~PKT_TX_L4_MASK;
+		m->ol_flags = ol_flags & ~RTE_MBUF_F_TX_L4_MASK;
 		return false;
 	default:
 		return true;
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..fd2ffc30d9 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -471,7 +471,7 @@ struct hns3_tx_queue {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, PMD needs to
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
@@ -545,12 +545,11 @@ struct hns3_queue_info {
 	unsigned int socket_id;
 };
 
-#define HNS3_TX_CKSUM_OFFLOAD_MASK ( \
-	PKT_TX_OUTER_UDP_CKSUM | \
-	PKT_TX_OUTER_IP_CKSUM | \
-	PKT_TX_IP_CKSUM | \
-	PKT_TX_TCP_SEG | \
-	PKT_TX_L4_MASK)
+#define HNS3_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+	RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+	RTE_MBUF_F_TX_IP_CKSUM | \
+	RTE_MBUF_F_TX_TCP_SEG | \
+	RTE_MBUF_F_TX_L4_MASK)
 
 enum hns3_cksum_status {
 	HNS3_CKSUM_NONE = 0,
@@ -574,29 +573,29 @@ hns3_rx_set_cksum_flag(struct hns3_rx_queue *rxq,
 				 BIT(HNS3_RXD_OL4E_B))
 
 	if (likely((l234_info & HNS3_RXD_CKSUM_ERR_MASK) == 0)) {
-		rxm->ol_flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		rxm->ol_flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L3E_B))) {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		rxq->dfx_stats.l3_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L4E_B))) {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		rxq->dfx_stats.l4_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL3E_B)))
 		rxq->dfx_stats.ol3_csum_errors++;
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL4E_B))) {
-		rxm->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		rxq->dfx_stats.ol4_csum_errors++;
 	}
 }
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 74c848d5ef..0edd4756f1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -105,7 +105,7 @@ hns3_desc_parse_field(struct hns3_rx_queue *rxq,
 		pkt = sw_ring[i].mbuf;
 
 		/* init rte_mbuf.rearm_data last 64-bit */
-		pkt->ol_flags = PKT_RX_RSS_HASH;
+		pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		l234_info = rxdp[i].rx.l234_info;
 		ol_info = rxdp[i].rx.ol_info;
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index d5c49333b2..be1fdbcdf0 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -43,7 +43,7 @@ hns3_desc_parse_field_sve(struct hns3_rx_queue *rxq,
 
 	for (i = 0; i < (int)bd_vld_num; i++) {
 		/* init rte_mbuf.rearm_data last 64-bit */
-		rx_pkts[i]->ol_flags = PKT_RX_RSS_HASH;
+		rx_pkts[i]->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		ret = hns3_handle_bdinfo(rxq, rx_pkts[i], key->bd_base_info[i],
 					 key->l234_info[i]);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 33a6a9e840..4cf4fa8a45 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -44,42 +44,39 @@
 #define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define I40E_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define I40E_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define I40E_TX_IEEE1588_TMST 0
 #endif
 
-#define I40E_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
-
-#define I40E_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_OUTER_IPV6 |	\
-		PKT_TX_IPV4 |		\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_OUTER_IP_CKSUM | \
-		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ |       \
-		PKT_TX_VLAN |	\
-		PKT_TX_TUNNEL_MASK |	\
+#define I40E_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+#define I40E_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_OUTER_IPV6 |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+		RTE_MBUF_F_TX_TCP_SEG |        \
+		RTE_MBUF_F_TX_QINQ |       \
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
 #define I40E_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
 
-#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK ( \
-		PKT_TX_IPV4 | \
-		PKT_TX_IPV6 | \
-		PKT_TX_OUTER_IPV4 | \
-		PKT_TX_OUTER_IPV6)
+#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK (RTE_MBUF_F_TX_IPV4 | \
+		RTE_MBUF_F_TX_IPV6 | \
+		RTE_MBUF_F_TX_OUTER_IPV4 | \
+		RTE_MBUF_F_TX_OUTER_IPV6)
 
 #define I40E_TX_OFFLOAD_SIMPLE_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
 
 static int
 i40e_monitor_callback(const uint64_t value,
@@ -119,7 +116,7 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -130,8 +127,8 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 #ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
 		(1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-			PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+			RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -154,11 +151,11 @@ i40e_rxd_status_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
-			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ?
-							PKT_RX_FDIR : 0);
+							RTE_MBUF_F_RX_FDIR : 0);
 
 	return flags;
 }
@@ -171,22 +168,22 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
 
 #define I40E_RX_ERR_BITS 0x3f
 	if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -205,9 +202,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
 
 	if ((mb->packet_type & RTE_PTYPE_L2_MASK)
 			== RTE_PTYPE_L2_ETHER_TIMESYNC)
-		pkt_flags = PKT_RX_IEEE1588_PTP;
+		pkt_flags = RTE_MBUF_F_RX_IEEE1588_PTP;
 	if (tsyn & 0x04) {
-		pkt_flags |= PKT_RX_IEEE1588_TMST;
+		pkt_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
 		mb->timesync = tsyn & 0x03;
 	}
 
@@ -233,21 +230,21 @@ i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	} else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 	if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) {
 		mb->hash.fdir.lo =
 			rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -258,11 +255,11 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -270,15 +267,15 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 		I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= I40E_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= I40E_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -306,7 +303,7 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			union i40e_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 				<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 	else
@@ -314,21 +311,21 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2)
 			<< I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -336,18 +333,18 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -526,10 +523,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 				ptype_tbl[(uint8_t)((qword1 &
 				I40E_RXD_QW1_PTYPE_MASK) >>
 				I40E_RXD_QW1_PTYPE_SHIFT)];
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp[j].wb.qword0.hi_dword.rss);
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -789,10 +786,10 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -957,10 +954,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1004,13 +1001,13 @@ i40e_recv_scattered_pkts(void *rx_queue,
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
-	mask |= PKT_TX_IEEE1588_TMST;
+	mask |= RTE_MBUF_F_TX_IEEE1588_TMST;
 #endif
 
 	return (flags & mask) ? 1 : 0;
@@ -1029,7 +1026,7 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = I40E_TX_CTX_DESC_TSO;
@@ -1122,7 +1119,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(i40e_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -1151,7 +1148,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1161,7 +1158,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			i40e_parse_tunneling_params(ol_flags, tx_offload,
 						    &cd_tunneling_params);
 		/* Enable checksum offloading */
@@ -1186,12 +1183,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled means no timestamp */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					i40e_set_tso_ctx(tx_pkt, tx_offload);
 			else {
 #ifdef RTE_LIBRTE_IEEE1588
-				if (ol_flags & PKT_TX_IEEE1588_TMST)
+				if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 					cd_type_cmd_tso_mss |=
 						((uint64_t)I40E_TX_CTX_DESC_TSYN <<
 						 I40E_TXD_CTX_QW1_CMD_SHIFT);
@@ -1200,7 +1197,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
@@ -1239,7 +1236,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
 				txd->buffer_addr =
 					rte_cpu_to_le_64(buf_dma_addr);
@@ -1580,7 +1577,7 @@ i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check for m->nb_segs to not exceed the limits. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > I40E_TX_MAX_MTU_SEG ||
 			    m->pkt_len > I40E_FRAME_SIZE_MAX) {
 				rte_errno = EINVAL;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b99323992f..d0bf86dfba 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -117,26 +117,26 @@ desc_to_olflags_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts)
 	/* map rss and vlan type to rss hash and vlan flag */
 	const vector unsigned char vlan_flags = (vector unsigned char){
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char rss_flags = (vector unsigned char){
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char l3_l4e_flags = (vector unsigned char){
 			0,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
-					     | PKT_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
+					     | RTE_MBUF_F_RX_IP_CKSUM_BAD,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = (vector unsigned int)vec_mergel(descs[0], descs[1]);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3b9eef91a9..ca10e0dd15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -78,7 +78,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -208,8 +208,8 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32(
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/*
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -217,11 +217,11 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8(
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/*
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -229,37 +229,37 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -442,7 +442,7 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask = _mm256_cmpeq_epi32(fdir, fdir_id);
 			__m256i fdir_bits = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d64223..2c779fa2a6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -204,7 +204,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -319,8 +319,8 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32
-		(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-		0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+		0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 
 	/* data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -328,11 +328,11 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0);
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/* data to be shuffled by the result of the flags mask shifted by 22
 	 * bits.  This gives use the l3_l4 flags.
@@ -340,33 +340,33 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
 		/* shift right 1 bit to make sure it not exceed 255 */
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 		/* second 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-		PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-		PKT_RX_OUTER_IP_CKSUM_BAD);
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	uint16_t i, received;
 
@@ -571,7 +571,7 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask =
 				_mm256_cmpeq_epi32(fdir, fdir_id);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b2683fda60..b9d9dec769 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -93,43 +93,43 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804};
 
 	const uint32x4_t cksum_mask = {
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD};
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD};
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const uint8x16_t vlan_flags = {
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t l3_l4e_flags = {
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = vzipq_u32(vreinterpretq_u32_u64(descs[0]),
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index b235502db5..497b2404c6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -143,7 +143,7 @@ descs_to_fdir_32b(volatile union i40e_rx_desc *rxdp, struct rte_mbuf **rx_pkt)
 	 * correct location in the mbuf->olflags
 	 */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	v_fd_id_mask = _mm_srli_epi32(v_fd_id_mask, 31);
 	v_fd_id_mask = _mm_slli_epi32(v_fd_id_mask, FDIR_ID_BIT_SHIFT);
 
@@ -203,9 +203,9 @@ descs_to_fdir_16b(__m128i fltstat, __m128i descs[4], struct rte_mbuf **rx_pkt)
 	__m128i v_desc0_mask = _mm_and_si128(v_desc_fdir_mask, v_desc0_shift);
 	descs[0] = _mm_blendv_epi8(descs[0], _mm_setzero_si128(), v_desc0_mask);
 
-	/* Shift to 1 or 0 bit per u32 lane, then to PKT_RX_FDIR_ID offset */
+	/* Shift to 1 or 0 bit per u32 lane, then to RTE_MBUF_F_RX_FDIR_ID offset */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	__m128i v_mask_one_bit = _mm_srli_epi32(v_fdir_id_mask, 31);
 	return _mm_slli_epi32(v_mask_one_bit, FDIR_ID_BIT_SHIFT);
 }
@@ -228,44 +228,44 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	/* Unpack "status" from quadword 1, bits 0:32 */
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index cba1ba8052..1cc1e96b1f 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -379,14 +379,14 @@ iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -403,13 +403,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -445,13 +445,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1044,7 +1044,7 @@ iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 	} else {
@@ -1072,7 +1072,7 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
 #endif
 
 	if (vlan_tci) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = vlan_tci;
 	}
 }
@@ -1089,26 +1089,26 @@ iavf_rxd_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					IAVF_RX_DESC_FLTSTAT_RSS_HASH) ==
-			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << IAVF_RX_DESC_STATUS_FLM_SHIFT) ?
-				PKT_RX_FDIR : 0);
+				RTE_MBUF_F_RX_FDIR : 0);
 
 	if (likely((error_bits & IAVF_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* TODO: Oversize error bit is not processed here */
 
@@ -1129,12 +1129,12 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == IAVF_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -1158,22 +1158,22 @@ iavf_flex_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -1292,11 +1292,11 @@ iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, rxm);
 
 		rxm->ol_flags |= pkt_flags;
@@ -1693,11 +1693,11 @@ iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, first_seg);
 
 		first_seg->ol_flags |= pkt_flags;
@@ -1862,11 +1862,11 @@ iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq)
 				IAVF_RXD_QW1_PTYPE_MASK) >>
 				IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(
 					rxdp[j].wb.qword0.hi_dword.rss);
 
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= iavf_rxd_build_fdir(&rxdp[j], mb);
 
 			mb->ol_flags |= pkt_flags;
@@ -2072,9 +2072,9 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
 static inline uint16_t
 iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
-	if (flags & PKT_TX_TCP_SEG)
+	if (flags & RTE_MBUF_F_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN &&
+	if (flags & RTE_MBUF_F_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2091,21 +2091,21 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 		      IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2113,18 +2113,18 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN &&
+		if (ol_flags & RTE_MBUF_F_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2297,12 +2297,12 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN &&
-			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
+			if (ol_flags & RTE_MBUF_F_TX_VLAN &&
+			    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
 				cd_l2tag2 = tx_pkt->vlan_tci;
@@ -2415,7 +2415,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > IAVF_TX_MAX_MTU_SEG) {
 				rte_errno = EINVAL;
 				return i;
@@ -2446,7 +2446,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS &&
-		    ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+		    ol_flags & (RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN)) {
 			ret = iavf_check_vlan_up2tc(txq, m);
 			if (ret != 0) {
 				rte_errno = -ret;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 6c3fdbb3b2..2ff18505f4 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -52,23 +52,21 @@
 #define IAVF_TSO_MAX_SEG          UINT8_MAX
 #define IAVF_TX_MAX_MTU_SEG       8
 
-#define IAVF_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
-
-#define IAVF_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
+#define IAVF_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IAVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
 
 /**
  * Rx Flex Descriptors
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036..b1d70036e5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -127,8 +127,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf =
-		_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-				 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+				 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -136,11 +136,11 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i rss_flags_shuf =
 		_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 				0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0);
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/**
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -148,33 +148,33 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -502,10 +502,10 @@ static inline __m256i
 flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -626,36 +626,36 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -664,27 +664,27 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m256i rss_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	const __m256i vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0);
 
 	uint16_t i, received;
@@ -1025,8 +1025,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 							0, 0, 0, 0,
 							0, 0, 0, 0,
 							0, 0,
-							PKT_RX_VLAN |
-							PKT_RX_VLAN_STRIPPED,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
 							0);
 
 				vlan_flags =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cd..6d19a2ecdc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -431,8 +431,8 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 * destination
 			 */
 			const __m256i vlan_flags_shuf =
-				_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-						 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+				_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+						 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 #endif
 
 #ifdef IAVF_RX_RSS_OFFLOAD
@@ -443,11 +443,11 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i rss_flags_shuf =
 				_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 						0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0);
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 #endif
 
 #ifdef IAVF_RX_CSUM_OFFLOAD
@@ -457,33 +457,33 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 
 #if defined(IAVF_RX_CSUM_OFFLOAD) || defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
@@ -688,10 +688,10 @@ static __rte_always_inline __m256i
 flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -974,36 +974,36 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 #if defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
 			/**
@@ -1015,28 +1015,28 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0);
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0);
 
 			const __m256i vlan_flags_shuf = _mm256_set_epi8
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0);
 #endif
 
@@ -1273,8 +1273,8 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 							 0, 0, 0, 0,
 							 0, 0, 0, 0,
 							 0, 0,
-							 PKT_RX_VLAN |
-							 PKT_RX_VLAN_STRIPPED,
+							 RTE_MBUF_F_RX_VLAN |
+							 RTE_MBUF_F_RX_VLAN_STRIPPED,
 							 0);
 
 					vlan_flags =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 457d6339e1..1fd37b74c1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -326,33 +326,33 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 		     IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -365,7 +365,7 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 #endif
 
 #ifdef IAVF_TX_VLAN_QINQ_OFFLOAD
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 			    IAVF_TXD_QW1_L2TAG1_SHIFT);
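
(Aside for reviewers: the L4 checksum switch above encodes the header length in 32-bit words, which is why each `sizeof(...) >> 2` appears in the descriptor offset field. A minimal stand-alone sketch of that selection pattern — all constants below are illustrative stand-ins, not the real IAVF_TX_DESC_* or RTE_MBUF_F_TX_* values:)

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in flag encoding for illustration only: a 2-bit L4 mask picks
 * the checksum type, mirroring the RTE_MBUF_F_TX_L4_MASK switch above. */
enum l4_type { L4_NONE, L4_TCP, L4_UDP, L4_SCTP };
#define TX_L4_MASK    0x3u
#define TX_TCP_CKSUM  0x1u
#define TX_UDP_CKSUM  0x2u
#define TX_SCTP_CKSUM 0x3u

static void l4_offload(uint32_t ol_flags, enum l4_type *type, uint32_t *len_dw)
{
	switch (ol_flags & TX_L4_MASK) {
	case TX_TCP_CKSUM:  /* TCP header = 20 bytes = 5 dwords */
		*type = L4_TCP;  *len_dw = 20 >> 2; break;
	case TX_UDP_CKSUM:  /* UDP header = 8 bytes = 2 dwords */
		*type = L4_UDP;  *len_dw = 8 >> 2;  break;
	case TX_SCTP_CKSUM: /* SCTP common header = 12 bytes = 3 dwords */
		*type = L4_SCTP; *len_dw = 12 >> 2; break;
	default:
		*type = L4_NONE; *len_dw = 0;       break;
	}
}
```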
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..363d0e62df 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -108,42 +108,42 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
 	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
@@ -193,10 +193,10 @@ static inline __m128i
 flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -225,43 +225,43 @@ flex_desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 	const __m128i desc_mask = _mm_set_epi32(0x3070, 0x3070,
 						0x3070, 0x3070);
 
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
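
(Aside for reviewers: the repeated ">> 1" in the shuffle tables above exists because `_mm_shuffle_epi8`/`_mm256_shuffle_epi8` lookup entries are single bytes, while the combined flag values can occupy bit 8; halving them keeps every entry under 256, and the vector path doubles them back afterwards. A tiny self-contained check of that round-trip, using stand-in flag values rather than the real RTE_MBUF_F_RX_* bits:)

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins: what matters for the trick is that each flag
 * sits within the low 9 bits, so (flags >> 1) always fits in a byte. */
#define F_IP_CKSUM_GOOD (1u << 7)
#define F_L4_CKSUM_GOOD (1u << 8)

static uint32_t pack_unpack(uint32_t flags)
{
	uint8_t packed = (uint8_t)(flags >> 1); /* byte-sized shuffle-table entry */
	return (uint32_t)packed << 1;           /* shifted back after the shuffle */
}
```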
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..0be124e170 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -10,11 +10,10 @@
 #include "ice_rxtx.h"
 #include "ice_rxtx_vec_common.h"
 
-#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
+#define ICE_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 
 /* Offset of mbuf dynamic field for protocol extraction data */
 int rte_net_ice_dynfield_proto_xtr_metadata_offs = -1;
@@ -88,13 +87,13 @@ ice_rxd_to_pkt_fields_by_comms_generic(__rte_unused struct ice_rx_queue *rxq,
 	uint16_t stat_err = rte_le_to_cpu_16(desc->status_error0);
 
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 #endif
@@ -112,14 +111,14 @@ ice_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct ice_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -136,13 +135,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v1(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -178,13 +177,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1473,27 +1472,27 @@ ice_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & ICE_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)))
-		flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 	return flags;
 }
@@ -1503,7 +1502,7 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 {
 	if (rte_le_to_cpu_16(rxdp->wb.status_error0) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -1515,8 +1514,8 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+				RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -2319,11 +2318,11 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -2331,16 +2330,16 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 		ICE_TXD_CTX_QW0_EIPLEN_S;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GTP:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GTP:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= ICE_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= ICE_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -2377,7 +2376,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			union ice_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 	else
@@ -2385,21 +2384,21 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2407,18 +2406,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2496,10 +2495,10 @@ ice_build_ctob(uint32_t td_cmd,
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 	return (flags & mask) ? 1 : 0;
 }
@@ -2517,7 +2516,7 @@ ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = ICE_TX_CTX_DESC_TSO;
@@ -2604,7 +2603,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -2633,14 +2632,14 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			ice_parse_tunneling_params(ol_flags, tx_offload,
 						   &cd_tunneling_params);
 
@@ -2664,7 +2663,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				txe->mbuf = NULL;
 			}
 
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					ice_set_tso_ctx(tx_pkt, tx_offload);
 
@@ -2672,7 +2671,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				rte_cpu_to_le_32(cd_tunneling_params);
 
 			/* TX context descriptor based double VLAN insert */
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
@@ -2700,7 +2699,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > ICE_MAX_DATA_PER_TXD)) {
 				txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
 				txd->cmd_type_offset_bsz =
@@ -3287,7 +3286,7 @@ ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_TCP_SEG &&
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG &&
 		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
 		     m->tso_segsz > ICE_MAX_TSO_MSS ||
 		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
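
(Aside for reviewers: the scalar ice_rxd_error_to_pkt_flags() hunk above follows one pattern per checksum class — if the hardware error bit is set report the _BAD flag, otherwise the matching _GOOD flag. A compact sketch of that decode, with hypothetical bit positions and flag values standing in for the real ICE_RX_FLEX_DESC_STATUS0_* and RTE_MBUF_F_RX_* macros:)

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in bit positions / flag bits for illustration only. */
#define ERR_IPE_S 0                 /* inner IP checksum error bit */
#define ERR_L4E_S 1                 /* L4 checksum error bit */
#define F_IP_GOOD (1u << 0)
#define F_IP_BAD  (1u << 1)
#define F_L4_GOOD (1u << 2)
#define F_L4_BAD  (1u << 3)

static uint32_t decode_err(uint16_t stat_err0)
{
	uint32_t flags = 0;

	/* Each error bit selects BAD, its absence selects GOOD. */
	flags |= (stat_err0 & (1 << ERR_IPE_S)) ? F_IP_BAD : F_IP_GOOD;
	flags |= (stat_err0 & (1 << ERR_L4E_S)) ? F_L4_BAD : F_L4_GOOD;
	return flags;
}
```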
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac0180..c20927dc5c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -20,10 +20,10 @@ static __rte_always_inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -142,82 +142,82 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -226,16 +226,16 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d2..1fe3de5aa2 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -135,10 +135,10 @@ static inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -242,82 +242,82 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -326,16 +326,16 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* 2nd 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	uint16_t i, received;
 
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a..6de054f237 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -565,33 +565,33 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 			ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offload */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
@@ -603,7 +603,7 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
 
 	/* Tx VLAN/QINQ insertion Offload */
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 				ICE_TXD_QW1_L2TAG1_S);
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b41..df1347e64d 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -14,10 +14,10 @@ static inline __m128i
 ice_flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -116,72 +116,72 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	 */
 	const __m128i desc_mask = _mm_set_epi32(0x30f0, 0x30f0,
 						0x30f0, 0x30f0);
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags =
-		_mm_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 9848afd9ca..fdb388568b 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -74,17 +74,16 @@
 #define IGC_TSO_MAX_MSS			9216
 
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGC_TX_OFFLOAD_MASK (		\
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN |	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_L4_MASK |	\
-		PKT_TX_TCP_SEG |	\
-		PKT_TX_UDP_SEG)
-
-#define IGC_TX_OFFLOAD_SEG	(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)
+#define IGC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |	\
+		RTE_MBUF_F_TX_TCP_SEG |	\
+		RTE_MBUF_F_TX_UDP_SEG)
+
+#define IGC_TX_OFFLOAD_SEG	(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)
 
 #define IGC_ADVTXD_POPTS_TXSM	0x00000200 /* L4 Checksum offload request */
 #define IGC_ADVTXD_POPTS_IXSM	0x00000100 /* IP Checksum offload request */
@@ -92,7 +91,7 @@
 /* L4 Packet TYPE of Reserved */
 #define IGC_ADVTXD_TUCMD_L4T_RSV	0x00001800
 
-#define IGC_TX_OFFLOAD_NOTSUP_MASK (PKT_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
+#define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -215,16 +214,18 @@ struct igc_tx_queue {
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
-	static uint64_t l4_chksum_flags[] = {0, 0, PKT_RX_L4_CKSUM_GOOD,
-			PKT_RX_L4_CKSUM_BAD};
+	static uint64_t l4_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD};
 
-	static uint64_t l3_chksum_flags[] = {0, 0, PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_BAD};
+	static uint64_t l3_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD};
 	uint64_t pkt_flags = 0;
 	uint32_t tmp;
 
 	if (statuserr & IGC_RXD_STAT_VP)
-		pkt_flags |= PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 
 	tmp = !!(statuserr & (IGC_RXD_STAT_L4CS | IGC_RXD_STAT_UDPCS));
 	tmp = (tmp << 1) | (uint32_t)!!(statuserr & IGC_RXD_EXT_ERR_L4E);
@@ -332,10 +333,10 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 	rxm->vlan_tci = rte_le_to_cpu_16(rxd->wb.upper.vlan);
 
 	pkt_flags = (hlen_type_rss & IGC_RXD_RSS_TYPE_MASK) ?
-			PKT_RX_RSS_HASH : 0;
+			RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	if (hlen_type_rss & IGC_RXD_VPKT)
-		pkt_flags |= PKT_RX_VLAN;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN;
 
 	pkt_flags |= rx_desc_statuserr_to_pkt_flags(staterr);
 
@@ -1468,7 +1469,7 @@ check_tso_para(uint64_t ol_req, union igc_tx_offload ol_para)
 	if (ol_para.tso_segsz > IGC_TSO_MAX_MSS || ol_para.l2_len +
 		ol_para.l3_len + ol_para.l4_len > IGC_TSO_MAX_HDRLEN) {
 		ol_req &= ~IGC_TX_OFFLOAD_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -1530,20 +1531,20 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
 	if (ol_flags & IGC_TX_OFFLOAD_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 		else
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV6 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP;
 		else
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP;
@@ -1554,26 +1555,26 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 		mss_l4len_idx |= (uint32_t)tx_offload.l4_len <<
 				IGC_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_tcp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_udp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_SCTP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_sctp_hdr)
@@ -1604,7 +1605,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
@@ -1616,8 +1617,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, IGC_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	tmp |= l4_olinfo[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return tmp;
 }
@@ -1774,7 +1775,7 @@ igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * Timer 0 should be used to for packet timestamping,
 		 * sample the packet timestamp to reg 0
 		 */
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IGC_ADVTXD_MAC_TSTAMP;
 
 		if (tx_ol_req) {
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 431435eea0..db6107dfc1 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -258,7 +258,7 @@ ionic_tx_tcp_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IP_CKSUM) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -279,7 +279,7 @@ ionic_tx_tcp_inner_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IPV4) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -356,14 +356,14 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
+	bool has_vlan = !!(txm->ol_flags & RTE_MBUF_F_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-		((ol_flags & PKT_TX_OUTER_IPV4) ||
-		(ol_flags & PKT_TX_OUTER_IPV6));
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+		((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	/* Preload inner-most TCP csum field with IP pseudo hdr
 	 * calculated with IP length set to zero.  HW will later
@@ -478,15 +478,15 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	desc = &desc_base[q->head_idx];
 	info = IONIC_INFO_PTR(q, q->head_idx);
 
-	if ((ol_flags & PKT_TX_IP_CKSUM) &&
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 	    (txq->flags & IONIC_QCQ_F_CSUM_L3)) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L3;
 	}
 
-	if (((ol_flags & PKT_TX_TCP_CKSUM) &&
+	if (((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_TCP)) ||
-	    ((ol_flags & PKT_TX_UDP_CKSUM) &&
+	    ((ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_UDP))) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L4;
@@ -495,11 +495,11 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN);
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-			((ol_flags & PKT_TX_OUTER_IPV4) ||
-			(ol_flags & PKT_TX_OUTER_IPV6));
+	has_vlan = (ol_flags & RTE_MBUF_F_TX_VLAN);
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+			(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+			((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+			 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
 	flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
@@ -556,7 +556,7 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			rte_prefetch0(&q->info[next_q_head_idx]);
 		}
 
-		if (tx_pkts[nb_tx]->ol_flags & PKT_TX_TCP_SEG)
+		if (tx_pkts[nb_tx]->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			err = ionic_tx_tso(txq, tx_pkts[nb_tx]);
 		else
 			err = ionic_tx(txq, tx_pkts[nb_tx]);
@@ -586,16 +586,15 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  *
  **********************************************************************/
 
-#define IONIC_TX_OFFLOAD_MASK (	\
-	PKT_TX_IPV4 |		\
-	PKT_TX_IPV6 |		\
-	PKT_TX_VLAN |		\
-	PKT_TX_IP_CKSUM |	\
-	PKT_TX_TCP_SEG |	\
-	PKT_TX_L4_MASK)
+#define IONIC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV4 |		\
+	RTE_MBUF_F_TX_IPV6 |		\
+	RTE_MBUF_F_TX_VLAN |		\
+	RTE_MBUF_F_TX_IP_CKSUM |	\
+	RTE_MBUF_F_TX_TCP_SEG |	\
+	RTE_MBUF_F_TX_L4_MASK)
 
 #define IONIC_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
 
 uint16_t
 ionic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -842,30 +841,30 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
 	}
 
 	/* RSS */
-	pkt_flags |= PKT_RX_RSS_HASH;
+	pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	rxm->hash.rss = cq_desc->rss_hash;
 
 	/* Vlan Strip */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_VLAN) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxm->vlan_tci = cq_desc->vlan_tci;
 	}
 
 	/* Checksum */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
 		if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_OK)
-			pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_BAD)
-			pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 		if ((cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_TCP_OK) ||
 			(cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_UDP_OK))
-			pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if ((cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_TCP_BAD) ||
 				(cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_UDP_BAD))
-			pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	rxm->ol_flags = pkt_flags;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..b1e694e2a0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 717ae8f775..795b3c1363 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -54,27 +54,26 @@
 #include "ixgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IXGBE_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IXGBE_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_MACSEC |			 \
-		PKT_TX_OUTER_IP_CKSUM |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
+#define IXGBE_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_MACSEC |			 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
 		IXGBE_TX_IEEE1588_TMST)
 
 #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
 
 #if 1
 #define RTE_PMD_USE_PREFETCH
@@ -384,14 +383,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4 |
 				IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
@@ -407,14 +406,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << IXGBE_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << IXGBE_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
@@ -422,7 +421,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
@@ -430,7 +429,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -445,7 +444,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		}
 	}
 
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
 		tx_offload_mask.l2_len |= ~0;
@@ -455,7 +454,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			       << IXGBE_ADVTXD_TUNNEL_LEN;
 	}
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union ixgbe_crypto_tx_desc_md *md =
 				(union ixgbe_crypto_tx_desc_md *)mdata;
 		seqnum_seed |=
@@ -479,7 +478,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = tx_offload.l3_len;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		vlan_macip_lens |= (tx_offload.outer_l2_len <<
 				    IXGBE_ADVTXD_MACLEN_SHIFT);
 	else
@@ -529,11 +528,11 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_IXSM;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
 	return tmp;
 }
@@ -543,13 +542,13 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		cmdtype |= (1 << IXGBE_ADVTXD_OUTERIPCS_SHIFT);
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= IXGBE_ADVTXD_MAC_LINKSEC;
 	return cmdtype;
 }
@@ -678,7 +677,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -826,14 +825,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IXGBE_ADVTXD_MAC_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
 
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is the
 				 * not the packet len but the tcp payload len */
 				pkt_len -= (tx_offload.l2_len +
@@ -1433,14 +1432,14 @@ static inline uint64_t
 ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -1468,7 +1467,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & IXGBE_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1484,10 +1483,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	 * Bit 30: L4I, L4I integrity error
 	 */
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	pkt_flags = error_to_pkt_flags_map[(rx_status >>
 		IXGBE_RXDADV_ERR_CKSUM_BIT) & IXGBE_RXDADV_ERR_CKSUM_MSK];
@@ -1499,18 +1498,18 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	if ((rx_status & IXGBE_RXDADV_ERR_TCPE) &&
 	    (pkt_info & IXGBE_RXDADV_PKTTYPE_UDP) &&
 	    rx_udp_csum_zero_err)
-		pkt_flags &= ~PKT_RX_L4_CKSUM_BAD;
+		pkt_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if ((rx_status & IXGBE_RXD_STAT_OUTERIPCS) &&
 	    (rx_status & IXGBE_RXDADV_ERR_OUTERIPER)) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & IXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1597,10 +1596,10 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
 				ixgbe_rxd_pkt_info_to_pkt_type
 					(pkt_info[j], rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss = rte_le_to_cpu_32(
 				    rxdp[j].wb.lower.hi_dword.rss);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash = rte_le_to_cpu_16(
 				    rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
 				    IXGBE_ATR_HASH_MASK;
@@ -1918,7 +1917,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr, vlan_flags);
@@ -1932,10 +1931,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ixgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 			rxm->hash.rss = rte_le_to_cpu_32(
 						rxd.wb.lower.hi_dword.rss);
-		else if (pkt_flags & PKT_RX_FDIR) {
+		else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash = rte_le_to_cpu_16(
 					rxd.wb.lower.hi_dword.csum_ip.csum) &
 					IXGBE_ATR_HASH_MASK;
@@ -2011,7 +2010,7 @@ ixgbe_fill_cluster_head_buf(
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
@@ -2024,9 +2023,9 @@ ixgbe_fill_cluster_head_buf(
 	head->packet_type =
 		ixgbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 		head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
-	else if (pkt_flags & PKT_RX_FDIR) {
+	else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash =
 			rte_le_to_cpu_16(desc->wb.lower.hi_dword.csum_ip.csum)
 							  & IXGBE_ATR_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index c541f537c7..90b254ea26 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -105,10 +105,10 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0x00, 0x00, 0x00, 0x00};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, 0, 0,
-			0, 0, 0, PKT_RX_FDIR};
+			0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_FDIR};
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const uint8x16_t vlan_csum_msk = {
@@ -123,23 +123,23 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const uint8x16_t vlan_csum_map_lo = {
-			PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0};
 
 	const uint8x16_t vlan_csum_map_hi = {
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0};
 
 	/* change mask from 0x200(IXGBE_RXDADV_PKTTYPE_UDP) to 0x2 */
@@ -153,7 +153,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0, 0, 0, 0};
 
 	const uint8x16_t udp_csum_bad_shuf = {
-			0xFF, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0, 0,
+			0xFF, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
@@ -194,7 +194,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 	vtag_lo = vorrq_u8(ptype, vtag_lo);
 
 	/* convert the UDP header present 0x2 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
 	 * vtag_lo (4x8). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -337,7 +337,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 1dea95e73b..1eed949495 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -108,9 +108,9 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i ipsec_proc_msk  =
 			_mm_set1_epi32(IXGBE_RXDADV_IPSEC_STATUS_SECP);
 	const __m128i ipsec_err_flag  =
-			_mm_set1_epi32(PKT_RX_SEC_OFFLOAD_FAILED |
-				       PKT_RX_SEC_OFFLOAD);
-	const __m128i ipsec_proc_flag = _mm_set1_epi32(PKT_RX_SEC_OFFLOAD);
+			_mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED |
+				       RTE_MBUF_F_RX_SEC_OFFLOAD);
+	const __m128i ipsec_proc_flag = _mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD);
 
 	rearm = _mm_set_epi32(*rearm3, *rearm2, *rearm1, *rearm0);
 	sterr = _mm_set_epi32(_mm_extract_epi32(descs[3], 2),
@@ -148,10 +148,10 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 			0x00FF, 0x00FF, 0x00FF, 0x00FF);
 
 	/* map rss type to rss hash flag */
-	const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+	const __m128i rss_flags = _mm_set_epi8(RTE_MBUF_F_RX_FDIR, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const __m128i vlan_csum_msk = _mm_set_epi16(
@@ -165,23 +165,23 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const __m128i vlan_csum_map_lo = _mm_set_epi8(
 		0, 0, 0, 0,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
 		0, 0, 0, 0,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD);
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 
 	const __m128i vlan_csum_map_hi = _mm_set_epi8(
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
 
 	/* mask everything except UDP header present if specified */
 	const __m128i udp_hdr_p_msk = _mm_set_epi16
@@ -190,7 +190,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 
 	const __m128i udp_csum_bad_shuf = _mm_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		 0, 0, 0, 0, 0, 0, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0xFF);
+		 0, 0, 0, 0, 0, 0, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0xFF);
 
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
 	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
@@ -228,7 +228,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	vtag1 = _mm_or_si128(ptype0, vtag1);
 
 	/* convert the UDP header present 0x200 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
 	 * vtag1 (4x16). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -428,7 +428,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index a067b60e47..0fd6659978 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -437,7 +437,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 				if (rh->r_dh.has_hash) {
 					uint64_t *hash_ptr;
 
-					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					hash_ptr = rte_pktmbuf_mtod(nicbuf,
 								    uint64_t *);
 					lio_swap_8B_data(hash_ptr, 1);
@@ -494,7 +494,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 						uint64_t *hash_ptr;
 
 						nicbuf->ol_flags |=
-						    PKT_RX_RSS_HASH;
+						    RTE_MBUF_F_RX_RSS_HASH;
 						hash_ptr = rte_pktmbuf_mtod(
 						    nicbuf, uint64_t *);
 						lio_swap_8B_data(hash_ptr, 1);
@@ -547,10 +547,10 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
 
 		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (droq->refill_count >= droq->refill_threshold) {
@@ -1675,13 +1675,13 @@ lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 		cmdsetup.s.iq_no = iq_no;
 
 		/* check checksum offload flags to form cmd */
-		if (m->ol_flags & PKT_TX_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			cmdsetup.s.ip_csum = 1;
 
-		if (m->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			cmdsetup.s.tnl_csum = 1;
-		else if ((m->ol_flags & PKT_TX_TCP_CKSUM) ||
-				(m->ol_flags & PKT_TX_UDP_CKSUM))
+		else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
+				(m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
 			cmdsetup.s.transport_csum = 1;
 
 		if (m->nb_segs == 1) {
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ecf08f53cf..ed9e41fcde 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -406,7 +406,7 @@ mlx4_tx_burst_tso_get_params(struct rte_mbuf *buf,
 {
 	struct mlx4_sq *sq = &txq->msq;
 	const uint8_t tunneled = txq->priv->hw_csum_l2tun &&
-				 (buf->ol_flags & PKT_TX_TUNNEL_MASK);
+				 (buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	tinfo->tso_header_size = buf->l2_len + buf->l3_len + buf->l4_len;
 	if (tunneled)
@@ -915,7 +915,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			uint16_t flags16[2];
 		} srcrb;
 		uint32_t lkey;
-		bool tso = txq->priv->tso && (buf->ol_flags & PKT_TX_TCP_SEG);
+		bool tso = txq->priv->tso && (buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 
 		/* Clean up old buffer. */
 		if (likely(elt->buf != NULL)) {
@@ -991,15 +991,15 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Enable HW checksum offload if requested */
 		if (txq->csum &&
 		    (buf->ol_flags &
-		     (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
+		     (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM))) {
 			const uint64_t is_tunneled = (buf->ol_flags &
-						      (PKT_TX_TUNNEL_GRE |
-						       PKT_TX_TUNNEL_VXLAN));
+						      (RTE_MBUF_F_TX_TUNNEL_GRE |
+						       RTE_MBUF_F_TX_TUNNEL_VXLAN));
 
 			if (is_tunneled && txq->csum_l2tun) {
 				owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
 						MLX4_WQE_CTRL_IL4_HDR_CSUM;
-				if (buf->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+				if (buf->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 					srcrb.flags |=
 					    RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
 			} else {
@@ -1112,18 +1112,18 @@ rxq_cq_to_ol_flags(uint32_t flags, int csum, int csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_IP_HDR_CSUM_OK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_TCP_UDP_CSUM_OK,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	if ((flags & MLX4_CQE_L2_TUNNEL) && csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_IPOK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_L4_CSUM,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -1274,7 +1274,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			/* Update packet information. */
 			pkt->packet_type =
 				rxq_cq_to_pkt_type(cqe, rxq->l2tun_offload);
-			pkt->ol_flags = PKT_RX_RSS_HASH;
+			pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 			pkt->hash.rss = cqe->immed_rss_invalid;
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c914a7120c..ffdd50c93d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -9275,7 +9275,7 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
 {
 	uint64_t ol_flags = m->ol_flags;
 	const struct mlx5_flow_tbl_data_entry *tble;
-	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	const uint64_t mask = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 
 	if (!is_tunnel_offload_active(dev)) {
 		info->flags = 0;
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..3ae62cb8e0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -692,10 +692,10 @@ rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe)
 	ol_flags =
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L3_HDR_VALID,
-			  PKT_RX_IP_CKSUM_GOOD) |
+			  RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L4_HDR_VALID,
-			  PKT_RX_L4_CKSUM_GOOD);
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -731,7 +731,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			rss_hash_res = rte_be_to_cpu_32(mcqe->rx_hash_result);
 		if (rss_hash_res) {
 			pkt->hash.rss = rss_hash_res;
-			pkt->ol_flags |= PKT_RX_RSS_HASH;
+			pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		}
 	}
 	if (rxq->mark) {
@@ -745,9 +745,9 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			mark = ((mcqe->byte_cnt_flow & 0xff) << 8) |
 				(mcqe->flow_tag_high << 16);
 		if (MLX5_FLOW_MARK_IS_VALID(mark)) {
-			pkt->ol_flags |= PKT_RX_FDIR;
+			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
 			if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
-				pkt->ol_flags |= PKT_RX_FDIR_ID;
+				pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 				pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
 			}
 		}
@@ -775,7 +775,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			vlan_strip = mcqe->hdr_type &
 				     RTE_BE16(MLX5_CQE_VLAN_STRIPPED);
 		if (vlan_strip) {
-			pkt->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_be_to_cpu_16(cqe->vlan_info);
 		}
 	}
@@ -863,7 +863,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			}
 			pkt = seg;
 			MLX5_ASSERT(len >= (rxq->crc_present << 2));
-			pkt->ol_flags &= EXT_ATTACHED_MBUF;
+			pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 			rxq_cq_to_mbuf(rxq, pkt, cqe, mcqe);
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
@@ -872,7 +872,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				mlx5_lro_update_hdr
 					(rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
 					 mcqe, rxq, len);
-				pkt->ol_flags |= PKT_RX_LRO;
+				pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 				pkt->tso_segsz = len / cqe->lro_num_seg;
 			}
 		}
@@ -1144,7 +1144,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(rte_pktmbuf_mtod(pkt, uint8_t *),
 					    cqe, mcqe, rxq, len);
-			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
 		PKT_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..a3244b8c78 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -488,7 +488,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		shinfo = &buf->shinfos[strd_idx];
 		rte_mbuf_ext_refcnt_set(shinfo, 1);
 		/*
-		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
+		 * RTE_MBUF_F_EXTERNAL will be set to pkt->ol_flags when
 		 * attaching the stride to mbuf and more offload flags
 		 * will be added below by calling rxq_cq_to_mbuf().
 		 * Other fields will be overwritten.
@@ -497,7 +497,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 					  buf_len, shinfo);
 		/* Set mbuf head-room. */
 		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
-		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
+		MLX5_ASSERT(pkt->ol_flags == RTE_MBUF_F_EXTERNAL);
 		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
 			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 		DATA_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..43c18c3d1b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -180,7 +180,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		mbuf_init->nb_segs = 1;
 		mbuf_init->port = rxq->port_id;
 		if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)
-			mbuf_init->ol_flags = EXT_ATTACHED_MBUF;
+			mbuf_init->ol_flags = RTE_MBUF_F_EXTERNAL;
 		/*
 		 * prevent compiler reordering:
 		 * rearm_data covers previous fields.
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35..646d2a31e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -255,10 +255,10 @@ mlx5_set_cksum_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_cksum_table); ++i) {
@@ -293,10 +293,10 @@ mlx5_set_swp_types_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_swp_types_table); ++i) {
 		v = 0;
@@ -306,7 +306,7 @@ mlx5_set_swp_types_table(void)
 			v |= MLX5_ETH_WQE_L4_OUTER_UDP;
 		if (i & (1 << 4))
 			v |= MLX5_ETH_WQE_L3_INNER_IPV6;
-		if ((i & 3) == (PKT_TX_UDP_CKSUM >> 52))
+		if ((i & 3) == (RTE_MBUF_F_TX_UDP_CKSUM >> 52))
 			v |= MLX5_ETH_WQE_L4_INNER_UDP;
 		mlx5_swp_types_table[i] = v;
 	}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 68cef1a83e..bcf487c34e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -283,20 +283,20 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char fdir_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR, PKT_RX_FDIR,
-					PKT_RX_FDIR, PKT_RX_FDIR};
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 				const vector unsigned char fdir_all_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID};
 				vector unsigned char fdir_id_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 				/* Extract flow_tag field. */
 				vector unsigned char ftag0 = vec_perm(mcqe1,
 							zero, flow_mark_shuf);
@@ -316,7 +316,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 					ol_flags_mask,
 					(vector unsigned long)fdir_all_flags);
 
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				invalid_mask = (vector unsigned char)
 					vec_cmpeq((vector unsigned int)ftag,
 					(vector unsigned int)zero);
@@ -376,10 +376,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char vlan_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED)};
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED)};
 				const vector unsigned char cv_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
@@ -433,10 +433,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 			const vector unsigned char hash_mask =
 				(vector unsigned char)(vector unsigned int) {
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH};
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH};
 			const vector unsigned char rearm_flags =
 				(vector unsigned char)(vector unsigned int) {
 				(uint32_t)t_pkt->ol_flags,
@@ -531,13 +531,13 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	vector unsigned char pinfo, ptype;
 	vector unsigned char ol_flags = (vector unsigned char)
 		(vector unsigned int){
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag};
 	vector unsigned char cv_flags;
 	const vector unsigned char zero = (vector unsigned char){0};
@@ -551,21 +551,21 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 		(vector unsigned char)(vector unsigned int){
 		0x00000003, 0x00000003, 0x00000003, 0x00000003};
 	const vector unsigned char cv_flag_sel = (vector unsigned char){
-		0, (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1), 0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1), 0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		0, (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), 0,
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), 0,
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0};
 	const vector unsigned char cv_mask =
 		(vector unsigned char)(vector unsigned int){
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED};
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED};
 	const vector unsigned char mbuf_init =
 		(vector unsigned char)vec_vsx_ld
 			(0, (vector unsigned char *)&rxq->mbuf_initializer);
@@ -602,19 +602,19 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 			0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00};
 		const vector unsigned char fdir_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR, PKT_RX_FDIR,
-			PKT_RX_FDIR, PKT_RX_FDIR};
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 		vector unsigned char fdir_id_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 		vector unsigned char flow_tag, invalid_mask;
 
 		flow_tag = (vector unsigned char)
 			vec_and((vector unsigned long)pinfo,
 			(vector unsigned long)pinfo_ft_mask);
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = (vector unsigned char)
 			vec_cmpeq((vector unsigned int)flow_tag,
 			(vector unsigned int)zero);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 5ff792f4cb..aa36df29a0 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -220,12 +220,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint32x4_t ft_mask =
 					vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
 				const uint32x4_t fdir_flags =
-					vdupq_n_u32(PKT_RX_FDIR);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
 				const uint32x4_t fdir_all_flags =
-					vdupq_n_u32(PKT_RX_FDIR |
-						    PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR |
+						    RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t fdir_id_flags =
-					vdupq_n_u32(PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t invalid_mask, ftag;
 
 				__asm__ volatile
@@ -240,7 +240,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				invalid_mask = vceqzq_u32(ftag);
 				ol_flags_mask = vorrq_u32(ol_flags_mask,
 							  fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = vorrq_u32(ol_flags,
 					vbicq_u32(fdir_flags, invalid_mask));
 				/* Mask out invalid entries. */
@@ -276,8 +276,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					mcq[pos % 8 + 3].hdr_type;
 				const uint32x4_t vlan_mask =
-					vdupq_n_u32(PKT_RX_VLAN |
-						    PKT_RX_VLAN_STRIPPED);
+					vdupq_n_u32(RTE_MBUF_F_RX_VLAN |
+						    RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const uint32x4_t cv_mask =
 					vdupq_n_u32(MLX5_CQE_VLAN_STRIPPED);
 				const uint32x4_t pkt_cv = {
@@ -317,7 +317,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const uint32x4_t hash_flags =
-				vdupq_n_u32(PKT_RX_RSS_HASH);
+				vdupq_n_u32(RTE_MBUF_F_RX_RSS_HASH);
 			const uint32x4_t rearm_flags =
 				vdupq_n_u32((uint32_t)t_pkt->ol_flags);
 
@@ -396,22 +396,22 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	uint16x4_t ptype;
 	uint32x4_t pinfo, cv_flags;
 	uint32x4_t ol_flags =
-		vdupq_n_u32(rxq->rss_hash * PKT_RX_RSS_HASH |
+		vdupq_n_u32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 			    rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	const uint32x4_t ptype_ol_mask = { 0x106, 0x106, 0x106, 0x106 };
 	const uint8x16_t cv_flag_sel = {
 		0,
-		(uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0
 	};
 	const uint32x4_t cv_mask =
-		vdupq_n_u32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		vdupq_n_u32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const uint64x2_t mbuf_init = vld1q_u64
 				((const uint64_t *)&rxq->mbuf_initializer);
 	uint64x2_t rearm0, rearm1, rearm2, rearm3;
@@ -419,11 +419,11 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 
 	if (rxq->mark) {
 		const uint32x4_t ft_def = vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
-		const uint32x4_t fdir_flags = vdupq_n_u32(PKT_RX_FDIR);
-		uint32x4_t fdir_id_flags = vdupq_n_u32(PKT_RX_FDIR_ID);
+		const uint32x4_t fdir_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
+		uint32x4_t fdir_id_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 		uint32x4_t invalid_mask;
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = vceqzq_u32(flow_tag);
 		ol_flags = vorrq_u32(ol_flags,
 				     vbicq_u32(fdir_flags, invalid_mask));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index adf991f013..b0fc29d7b9 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -204,12 +204,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const __m128i ft_mask =
 					_mm_set1_epi32(0xffffff00);
 				const __m128i fdir_flags =
-					_mm_set1_epi32(PKT_RX_FDIR);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
 				const __m128i fdir_all_flags =
-					_mm_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 				__m128i fdir_id_flags =
-					_mm_set1_epi32(PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 
 				/* Extract flow_tag field. */
 				__m128i ftag0 =
@@ -223,7 +223,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 
 				ol_flags_mask = _mm_or_si128(ol_flags_mask,
 							     fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
 							 fdir_flags));
@@ -260,8 +260,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					_mm_extract_epi8(mcqe2, 8);
 				const __m128i vlan_mask =
-					_mm_set1_epi32(PKT_RX_VLAN |
-						       PKT_RX_VLAN_STRIPPED);
+					_mm_set1_epi32(RTE_MBUF_F_RX_VLAN |
+						       RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const __m128i cv_mask =
 					_mm_set1_epi32(MLX5_CQE_VLAN_STRIPPED);
 				const __m128i pkt_cv =
@@ -303,7 +303,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const __m128i hash_flags =
-				_mm_set1_epi32(PKT_RX_RSS_HASH);
+				_mm_set1_epi32(RTE_MBUF_F_RX_RSS_HASH);
 			const __m128i rearm_flags =
 				_mm_set1_epi32((uint32_t)t_pkt->ol_flags);
 
@@ -381,7 +381,7 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 {
 	__m128i pinfo0, pinfo1;
 	__m128i pinfo, ptype;
-	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * PKT_RX_RSS_HASH |
+	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 					  rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	__m128i cv_flags;
 	const __m128i zero = _mm_setzero_si128();
@@ -390,17 +390,17 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	const __m128i pinfo_mask = _mm_set1_epi32(0x3);
 	const __m128i cv_flag_sel =
 		_mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, 0,
-			     (uint8_t)((PKT_RX_IP_CKSUM_GOOD |
-					PKT_RX_L4_CKSUM_GOOD) >> 1),
+			     (uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+					RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
-			     (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
+			     (uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
 			     0);
 	const __m128i cv_mask =
-		_mm_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			      PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		_mm_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const __m128i mbuf_init =
 		_mm_load_si128((__m128i *)&rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -416,12 +416,12 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	ptype = _mm_unpacklo_epi64(pinfo0, pinfo1);
 	if (rxq->mark) {
 		const __m128i pinfo_ft_mask = _mm_set1_epi32(0xffffff00);
-		const __m128i fdir_flags = _mm_set1_epi32(PKT_RX_FDIR);
-		__m128i fdir_id_flags = _mm_set1_epi32(PKT_RX_FDIR_ID);
+		const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
+		__m128i fdir_id_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 		__m128i flow_tag, invalid_mask;
 
 		flow_tag = _mm_and_si128(pinfo, pinfo_ft_mask);
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = _mm_cmpeq_epi32(flow_tag, zero);
 		ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1efe912a06..3c71d825b8 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -78,7 +78,7 @@ uint16_t mlx5_tx_burst_##func(void *txq, \
 
 /* Mbuf dynamic flag offset for inline. */
 extern uint64_t rte_net_mlx5_dynf_inline_mask;
-#define PKT_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
+#define RTE_MBUF_F_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
 
 extern uint32_t mlx5_ptype_table[] __rte_cache_aligned;
 extern uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
@@ -513,22 +513,22 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	if (!MLX5_TXOFF_CONFIG(SWP))
 		return 0;
 	ol = loc->mbuf->ol_flags;
-	tunnel = ol & PKT_TX_TUNNEL_MASK;
+	tunnel = ol & RTE_MBUF_F_TX_TUNNEL_MASK;
 	/*
 	 * Check whether Software Parser is required.
 	 * Only customized tunnels may ask for.
 	 */
-	if (likely(tunnel != PKT_TX_TUNNEL_UDP && tunnel != PKT_TX_TUNNEL_IP))
+	if (likely(tunnel != RTE_MBUF_F_TX_TUNNEL_UDP && tunnel != RTE_MBUF_F_TX_TUNNEL_IP))
 		return 0;
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
-	idx = (ol & (PKT_TX_L4_MASK | PKT_TX_IPV6 | PKT_TX_OUTER_IPV6)) >> 52;
-	idx |= (tunnel == PKT_TX_TUNNEL_UDP) ? (1 << 9) : 0;
+	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 | RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
+	idx |= (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP) ? (1 << 9) : 0;
 	*swp_flags = mlx5_swp_types_table[idx];
 	/*
 	 * Set offsets for SW parser. Since ConnectX-5, SW parser just
@@ -538,19 +538,19 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & RTE_MBUF_F_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
-	if (tunnel == PKT_TX_TUNNEL_UDP)
+	if (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP)
 		set |= off >> 1; /* Outer L4 offset. */
-	if (ol & (PKT_TX_IPV4 | PKT_TX_IPV6)) { /* Inner IP. */
-		const uint64_t csum = ol & PKT_TX_L4_MASK;
+	if (ol & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) { /* Inner IP. */
+		const uint64_t csum = ol & RTE_MBUF_F_TX_L4_MASK;
 			off += loc->mbuf->l2_len;
 		set |= (off >> 1) << 24; /* Inner L3 offset. */
-		if (csum == PKT_TX_TCP_CKSUM ||
-		    csum == PKT_TX_UDP_CKSUM ||
-		    (MLX5_TXOFF_CONFIG(TSO) && ol & PKT_TX_TCP_SEG)) {
+		if (csum == RTE_MBUF_F_TX_TCP_CKSUM ||
+		    csum == RTE_MBUF_F_TX_UDP_CKSUM ||
+		    (MLX5_TXOFF_CONFIG(TSO) && ol & RTE_MBUF_F_TX_TCP_SEG)) {
 			off += loc->mbuf->l3_len;
 			set |= (off >> 1) << 16; /* Inner L4 offset. */
 		}
@@ -572,16 +572,16 @@ static __rte_always_inline uint8_t
 txq_ol_cksum_to_cs(struct rte_mbuf *buf)
 {
 	uint32_t idx;
-	uint8_t is_tunnel = !!(buf->ol_flags & PKT_TX_TUNNEL_MASK);
-	const uint64_t ol_flags_mask = PKT_TX_TCP_SEG | PKT_TX_L4_MASK |
-				       PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM;
+	uint8_t is_tunnel = !!(buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
+	const uint64_t ol_flags_mask = RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_L4_MASK |
+				       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	idx = ((buf->ol_flags & ol_flags_mask) >> 50) | (!!is_tunnel << 9);
@@ -952,11 +952,11 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1012,7 +1012,7 @@ mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = RTE_BE16(MLX5_ESEG_MIN_INLINE_SIZE);
@@ -1095,7 +1095,7 @@ mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = rte_cpu_to_be_16(inlen);
@@ -1203,7 +1203,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
 			MLX5_ASSERT(loc->mbuf_nseg > 1);
 			MLX5_ASSERT(loc->mbuf);
 			--loc->mbuf_nseg;
-			if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+			if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 				unsigned int diff;
 
 				if (copy >= must) {
@@ -1307,7 +1307,7 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	MLX5_ASSERT(inlen >= MLX5_ESEG_MIN_INLINE_SIZE);
 	pdst = (uint8_t *)&es->inline_data;
@@ -1814,13 +1814,13 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
 	if (unlikely((!inlen || !loc->mbuf->tso_segsz)))
 		return MLX5_TXCMP_CODE_ERROR;
-	if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+	if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		inlen += loc->mbuf->outer_l2_len + loc->mbuf->outer_l3_len;
 	/* Packet must contain all TSO headers. */
 	if (unlikely(inlen > MLX5_MAX_TSO_HEADER ||
@@ -1929,7 +1929,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN)
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2028,7 +2028,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2036,7 +2036,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_ERROR;
 	MLX5_ASSERT(txq->inlen_send >= MLX5_ESEG_MIN_INLINE_SIZE);
 	if (inlen > txq->inlen_send ||
-	    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 		struct rte_mbuf *mbuf;
 		unsigned int nxlen;
 		uintptr_t start;
@@ -2058,7 +2058,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			 * support the offload, will do with software inline.
 			 */
 			inlen = MLX5_ESEG_MIN_INLINE_SIZE;
-		} else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+		} else if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE ||
 			   nxlen > txq->inlen_send) {
 			return mlx5_tx_packet_multi_send(txq, loc, olx);
 		} else {
@@ -2198,7 +2198,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 		if (loc->elts_free < NB_SEGS(loc->mbuf))
 			return MLX5_TXCMP_CODE_EXIT;
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/* Proceed with multi-segment TSO. */
 			ret = mlx5_tx_packet_multi_tso(txq, loc, olx);
 		} else if (MLX5_TXOFF_CONFIG(INLINE)) {
@@ -2224,7 +2224,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 			continue;
 		/* Here ends the series of multi-segment packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 			return MLX5_TXCMP_CODE_TSO;
 		return MLX5_TXCMP_CODE_SINGLE;
 	}
@@ -2291,7 +2291,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+		    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2302,7 +2302,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		       loc->mbuf->l3_len + loc->mbuf->l4_len;
 		if (unlikely((!hlen || !loc->mbuf->tso_segsz)))
 			return MLX5_TXCMP_CODE_ERROR;
-		if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+		if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			hlen += loc->mbuf->outer_l2_len +
 				loc->mbuf->outer_l3_len;
 		/* Segment must contain all TSO headers. */
@@ -2368,7 +2368,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		if (MLX5_TXOFF_CONFIG(MULTI) &&
 		    unlikely(NB_SEGS(loc->mbuf) > 1))
 			return MLX5_TXCMP_CODE_MULTI;
-		if (likely(!(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)))
+		if (likely(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)))
 			return MLX5_TXCMP_CODE_SINGLE;
 		/* Continue with the next TSO packet. */
 	}
@@ -2409,14 +2409,14 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 	/* Check for TSO packet. */
 	if (newp &&
 	    MLX5_TXOFF_CONFIG(TSO) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return MLX5_TXCMP_CODE_TSO;
 	/* Check if eMPW is enabled at all. */
 	if (!MLX5_TXOFF_CONFIG(EMPW))
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2469,7 +2469,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* Fill metadata field if needed. */
 	if (MLX5_TXOFF_CONFIG(METADATA) &&
-		es->metadata != (loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		es->metadata != (loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 				 *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0))
 		return false;
 	/* Legacy MPW can send packets with the same length only. */
@@ -2478,7 +2478,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2914,7 +2914,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline or not inline - that's the Question. */
 			if (dlen > txq->inlen_empw ||
-			    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
 				goto pointer_empw;
 			if (MLX5_TXOFF_CONFIG(MPW)) {
 				if (dlen > txq->inlen_send)
@@ -2939,7 +2939,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3004,7 +3004,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN));
+					    RTE_MBUF_F_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3149,7 +3149,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3170,7 +3170,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 				if (inlen <= MLX5_ESEG_MIN_INLINE_SIZE)
 					return MLX5_TXCMP_CODE_ERROR;
 				if (loc->mbuf->ol_flags &
-				    PKT_TX_DYNF_NOINLINE) {
+				    RTE_MBUF_F_TX_DYNF_NOINLINE) {
 					/*
 					 * The hint flag not to inline packet
 					 * data is set. Check whether we can
@@ -3380,7 +3380,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
@@ -3576,7 +3576,7 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		/* Dedicated branch for single-segment TSO packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc.mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc.mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/*
 			 * TSO might require special way for inlining
 			 * (dedicated parameters) and is sent with
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f..d4a06d5795 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -64,9 +64,9 @@
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
 			    DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-				PKT_TX_TCP_CKSUM | \
-				PKT_TX_UDP_CKSUM)
+#define MVNETA_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+				RTE_MBUF_F_TX_TCP_CKSUM | \
+				RTE_MBUF_F_TX_UDP_CKSUM)
 
 struct mvneta_priv {
 	/* Hot fields, used in fast path. */
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc090..de53ef935f 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -304,18 +304,18 @@ mvneta_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = NETA_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = NETA_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if (ol_flags & PKT_TX_TCP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if (ol_flags & PKT_TX_UDP_CKSUM) {
+	} else if (ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
@@ -342,15 +342,15 @@ mvneta_desc_to_ol_flags(struct neta_ppio_desc *desc)
 
 	status = neta_ppio_inq_desc_get_l3_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags = PKT_RX_IP_CKSUM_BAD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags = PKT_RX_IP_CKSUM_GOOD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	status = neta_ppio_inq_desc_get_l4_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	return flags;
 }
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8d..ed6823c2ae 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -69,9 +69,9 @@
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
 			  DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-			      PKT_TX_TCP_CKSUM | \
-			      PKT_TX_UDP_CKSUM)
+#define MRVL_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+			      RTE_MBUF_F_TX_TCP_CKSUM | \
+			      RTE_MBUF_F_TX_UDP_CKSUM)
 
 static const char * const valid_args[] = {
 	MRVL_IFACE_NAME_ARG,
@@ -2545,18 +2545,18 @@ mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc, uint64_t packet_type)
 	if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
 		status = pp2_ppio_inq_desc_get_l3_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_IP_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			flags |= PKT_RX_IP_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (((packet_type & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) ||
 	    ((packet_type & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP)) {
 		status = pp2_ppio_inq_desc_get_l4_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_L4_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			flags |= PKT_RX_L4_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return flags;
@@ -2716,18 +2716,18 @@ mrvl_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = PP2_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = PP2_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if ((ol_flags & PKT_TX_L4_MASK) ==  PKT_TX_UDP_CKSUM) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) ==  RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index afef7a96a3..8af9a084b9 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -615,7 +615,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->vlan_info != HN_NDIS_VLAN_INFO_INVALID) {
 		m->vlan_tci = info->vlan_info;
-		m->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 
 		/* NDIS always strips tag, put it back if necessary */
 		if (!hv->vlan_strip && rte_vlan_insert(&m)) {
@@ -630,18 +630,18 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->csum_info != HN_NDIS_RXCSUM_INFO_INVALID) {
 		if (info->csum_info & NDIS_RXCSUM_INFO_IPCS_OK)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (info->csum_info & (NDIS_RXCSUM_INFO_UDPCS_OK
 				       | NDIS_RXCSUM_INFO_TCPCS_OK))
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (info->csum_info & (NDIS_RXCSUM_INFO_TCPCS_FAILED
 					    | NDIS_RXCSUM_INFO_UDPCS_FAILED))
-			m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	if (info->hash_info != HN_NDIS_HASH_INFO_INVALID) {
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		m->hash.rss = info->hash_value;
 	}
 
@@ -1331,17 +1331,17 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_LSO2_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_LSO);
 
-		if (m->ol_flags & PKT_TX_IPV6) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			*pi_data = NDIS_LSO2_INFO_MAKEIPV6(hlen,
 							   m->tso_segsz);
 		} else {
@@ -1349,23 +1349,23 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 							   m->tso_segsz);
 		}
 	} else if (m->ol_flags &
-		   (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)) {
+		   (RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_TXCSUM_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_CSUM);
 		*pi_data = 0;
 
-		if (m->ol_flags & PKT_TX_IPV6)
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6)
 			*pi_data |= NDIS_TXCSUM_INFO_IPV6;
-		if (m->ol_flags & PKT_TX_IPV4) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 			*pi_data |= NDIS_TXCSUM_INFO_IPV4;
 
-			if (m->ol_flags & PKT_TX_IP_CKSUM)
+			if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				*pi_data |= NDIS_TXCSUM_INFO_IPCS;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKTCPCS(hlen);
-		else if (m->ol_flags & PKT_TX_UDP_CKSUM)
+		else if (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKUDPCS(hlen);
 	}
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 0dcaf525f6..03afc779cf 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -203,7 +203,7 @@ nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	}
 
 	mbuf->hash.rss = hash;
-	mbuf->ol_flags |= PKT_RX_RSS_HASH;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	switch (hash_type) {
 	case NFP_NET_RSS_IPV4:
@@ -245,9 +245,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	/* If IPv4 and IP checksum error, fail */
 	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
 	    !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
-		mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
 	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
@@ -255,9 +255,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 		return;
 
 	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
-		mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
-		mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 /*
@@ -403,7 +403,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) &&
 		    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) {
 			mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan);
-			mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		/* Adding the mbuf to the mbuf array passed by the app */
@@ -821,7 +821,7 @@ nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 
 	ol_flags = mb->ol_flags;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		goto clean_txd;
 
 	txd->l3_offset = mb->l2_len;
@@ -853,19 +853,19 @@ nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 	ol_flags = mb->ol_flags;
 
 	/* IPv6 does not need checksum */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
 		break;
 	}
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 		txd->flags |= PCIE_DESC_TX_CSUM;
 }
 
@@ -929,7 +929,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN) &&
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..eeadd555c7 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -242,20 +242,20 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 	 * 0x2 - TCP L4 checksum
 	 * 0x3 - SCTP L4 checksum
 	 */
-	const uint8_t csum = (!(((ol_flags ^ PKT_TX_UDP_CKSUM) >> 52) & 0x3) +
-		      (!(((ol_flags ^ PKT_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
-		      (!(((ol_flags ^ PKT_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
-
-	const uint8_t is_tunnel_parsed = (!!(ol_flags & PKT_TX_TUNNEL_GTP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN_GPE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GRE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GENEVE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IPIP));
-
-	const uint8_t csum_outer = (!!(ol_flags & PKT_TX_OUTER_UDP_CKSUM) ||
-				    !!(ol_flags & PKT_TX_TUNNEL_UDP));
+	const uint8_t csum = (!(((ol_flags ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
+
+	const uint8_t is_tunnel_parsed = (!!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GTP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GRE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GENEVE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IPIP));
+
+	const uint8_t csum_outer = (!!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) ||
+				    !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_UDP));
 	const uint8_t outer_l2_len = m->outer_l2_len;
 	const uint8_t l2_len = m->l2_len;
 
@@ -266,7 +266,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = outer_l2_len;
 			send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 			/* Outer L4 */
 			send_hdr->w0.ckl4 = csum_outer;
@@ -277,7 +277,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			/* Set clke for PKO to calculate inner IPV4 header
 			 * checksum.
 			 */
-			send_hdr->w0.ckle = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckle = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.cklf = csum;
@@ -286,7 +286,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = l2_len;
 			send_hdr->w0.l4ptr = l2_len + m->l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.ckl4 = csum;
@@ -296,7 +296,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = outer_l2_len;
 		send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 		/* Outer L4 */
 		send_hdr->w0.ckl4 = csum_outer;
@@ -305,7 +305,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = l2_len;
 		send_hdr->w0.l4ptr = l2_len + m->l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 		/* Inner L4 */
 		send_hdr->w0.ckl4 = csum;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..541f793d5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -745,15 +745,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
index 4764608c2d..5fa9ae1396 100644
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -264,9 +264,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -274,46 +274,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..5a7d220e22 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,7 +92,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -103,7 +103,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -205,10 +205,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0; ol_flags1 = 0;
 			ol_flags2 = 0; ol_flags3 = 0;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..530bf0082f 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -88,15 +88,15 @@ otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*otx2_timestamp_dynfield(mbuf, tstamp) =
 				rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 					*otx2_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-				PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+				RTE_MBUF_F_RX_IEEE1588_TMST |
 				tstamp->rx_tstamp_dynflag;
 		}
 	}
@@ -161,9 +161,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -252,7 +252,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	int i;
 
 	if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	/* 20 bits of tag would have the SPI */
 	spi = cq->tag & 0xFFFFF;
@@ -266,7 +266,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 
 	if (sa->replay_win_sz) {
 		if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
@@ -294,7 +294,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	m_len = ip_len + l2_len;
 	m->data_len = m_len;
 	m->pkt_len = m_len;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -318,7 +318,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -326,11 +326,11 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..afc47ca888 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -364,26 +364,26 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -655,40 +655,40 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..c9558e50a7 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -29,8 +29,8 @@
 	 NIX_TX_OFFLOAD_TSO_F)
 
 #define NIX_UDP_TUN_BITMASK \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) | \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 #define NIX_LSO_FORMAT_IDX_TSOV4	(0)
 #define NIX_LSO_FORMAT_IDX_TSOV6	(1)
@@ -54,7 +54,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -67,7 +67,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrpt the actual tx tstamp registered
@@ -152,12 +152,12 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 	uint64_t mask, ol_flags = m->ol_flags;
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			m->l2_len + m->l3_len + m->l4_len;
 
@@ -166,15 +166,15 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
-				(2 << !!(ol_flags & PKT_TX_OUTER_IPV6)));
+				(2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -189,7 +189,7 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
-				m->l4_len + (2 << !!(ol_flags & PKT_TX_IPV6)));
+				m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -239,11 +239,11 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -255,15 +255,15 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -274,16 +274,16 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 			((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -299,29 +299,29 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR &&
 	    flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -332,18 +332,18 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 050c6f5c32..1b4dfff3c3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1639,9 +1639,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1649,9 +1649,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			flags = fp_cqe->tunnel_pars_flags.flags;
@@ -1684,31 +1684,31 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (unlikely(CQE_HAS_VLAN(parse_flag) ||
 			     CQE_HAS_OUTER_VLAN(parse_flag))) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		if (rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -1837,7 +1837,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
 			tpa_start_flg = true;
 			/* Mark it as LRO packet */
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 			/* In split mode,  seg_len is same as len_on_first_bd
 			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
@@ -1908,9 +1908,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1918,9 +1918,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				  rxq->rx_hw_errors++;
-				  ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				  ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				  ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				  ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			if (tpa_start_flg)
@@ -1957,32 +1957,32 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (CQE_HAS_VLAN(parse_flag) ||
 		    CQE_HAS_OUTER_VLAN(parse_flag)) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		/* RSS Hash */
 		if (qdev->rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -2178,7 +2178,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (ol_flags & PKT_TX_TCP_SEG) {
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
 				rte_errno = EINVAL;
 				break;
@@ -2196,14 +2196,14 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 		}
 		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
 			/* We support only limited tunnel protocols */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				uint64_t temp;
 
-				temp = ol_flags & PKT_TX_TUNNEL_MASK;
-				if (temp == PKT_TX_TUNNEL_VXLAN ||
-				    temp == PKT_TX_TUNNEL_GENEVE ||
-				    temp == PKT_TX_TUNNEL_MPLSINUDP ||
-				    temp == PKT_TX_TUNNEL_GRE)
+				temp = ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK;
+				if (temp == RTE_MBUF_F_TX_TUNNEL_VXLAN ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GENEVE ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_MPLSINUDP ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GRE)
 					continue;
 			}
 
@@ -2311,13 +2311,13 @@ qede_xmit_pkts_regular(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM)
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM)))
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM)))
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 
@@ -2456,7 +2456,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * offloads. Don't rely on pkt_type marked by Rx, instead use
 		 * tx_ol_flags to decide.
 		 */
-		tunn_flg = !!(tx_ol_flags & PKT_TX_TUNNEL_MASK);
+		tunn_flg = !!(tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 		if (tunn_flg) {
 			/* Check against max which is Tunnel IPv6 + ext */
@@ -2477,8 +2477,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* Outer IP checksum offload */
-			if (tx_ol_flags & (PKT_TX_OUTER_IP_CKSUM |
-					   PKT_TX_OUTER_IPV4)) {
+			if (tx_ol_flags & (RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+					   RTE_MBUF_F_TX_OUTER_IPV4)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -2490,8 +2490,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * and inner layers  lengths need to be provided in
 			 * mbuf.
 			 */
-			if ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
-						PKT_TX_TUNNEL_MPLSINUDP) {
+			if ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ==
+						RTE_MBUF_F_TX_TUNNEL_MPLSINUDP) {
 				mplsoudp_flg = true;
 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 				qede_mpls_tunn_tx_sanity_check(mbuf, txq);
@@ -2524,18 +2524,18 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				    1 << ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT;
 
 				/* Mark inner IPv6 if present */
-				if (tx_ol_flags & PKT_TX_IPV6)
+				if (tx_ol_flags & RTE_MBUF_F_TX_IPV6)
 					bd2_bf1 |=
 						1 << ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT;
 
 				/* Inner L4 offsets */
-				if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-				     (tx_ol_flags & (PKT_TX_UDP_CKSUM |
-							PKT_TX_TCP_CKSUM))) {
+				if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+				     (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM |
+							RTE_MBUF_F_TX_TCP_CKSUM))) {
 					/* Determines if BD3 is needed */
 					tunn_ipv6_ext_flg = true;
-					if ((tx_ol_flags & PKT_TX_L4_MASK) ==
-							PKT_TX_UDP_CKSUM) {
+					if ((tx_ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+							RTE_MBUF_F_TX_UDP_CKSUM) {
 						bd2_bf1 |=
 							1 << ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT;
 					}
@@ -2553,7 +2553,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			} /* End MPLSoUDP */
 		} /* End Tunnel handling */
 
-		if (tx_ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			lso_flg = true;
 			if (unlikely(txq->nb_tx_avail <
 						ETH_TX_MIN_BDS_PER_LSO_PKT))
@@ -2570,7 +2570,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			bd1_bd_flags_bf |= 1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
+			/* RTE_MBUF_F_TX_TCP_SEG implies RTE_MBUF_F_TX_TCP_CKSUM */
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			mss = rte_cpu_to_le_16(mbuf->tso_segsz);
@@ -2587,14 +2587,14 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
@@ -2602,8 +2602,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * csum offload is requested then we need to force
 			 * recalculation of L4 tunnel header csum also.
 			 */
-			if (tunn_flg && ((tx_ol_flags & PKT_TX_TUNNEL_MASK) !=
-							PKT_TX_TUNNEL_GRE)) {
+			if (tunn_flg && ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) !=
+							RTE_MBUF_F_TX_TUNNEL_GRE)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT;
@@ -2611,8 +2611,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM))) {
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM))) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 025ed6fff2..828df1cf99 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -144,20 +144,20 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
-#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
-				   PKT_TX_TCP_CKSUM             | \
-				   PKT_TX_UDP_CKSUM             | \
-				   PKT_TX_OUTER_IP_CKSUM        | \
-				   PKT_TX_TCP_SEG		| \
-				   PKT_TX_IPV4			| \
-				   PKT_TX_IPV6)
+#define QEDE_TX_CSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM              | \
+				   RTE_MBUF_F_TX_TCP_CKSUM             | \
+				   RTE_MBUF_F_TX_UDP_CKSUM             | \
+				   RTE_MBUF_F_TX_OUTER_IP_CKSUM        | \
+				   RTE_MBUF_F_TX_TCP_SEG		| \
+				   RTE_MBUF_F_TX_IPV4			| \
+				   RTE_MBUF_F_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN		| \
-			      PKT_TX_TUNNEL_MASK)
+			      RTE_MBUF_F_TX_VLAN		| \
+			      RTE_MBUF_F_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
 
 /* TPA related structures */
 struct qede_agg_info {
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 777807985b..20f3b4eaba 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -241,7 +241,7 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			   unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
-	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	unsigned int tcph_off = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				 m->outer_l2_len + m->outer_l3_len : 0) +
 				m->l2_len + m->l3_len;
 	unsigned int header_len = tcph_off + m->l4_len;
@@ -279,21 +279,21 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			 * to proceed with additional checks below.
 			 * Otherwise, throw an error.
 			 */
-			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 ||
 			    tso_bounce_buffer_len == 0)
 				return EINVAL;
 		}
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		case 0:
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 			/* FALLTHROUGH */
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			if (!(m->ol_flags &
-			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+			      (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 				return EINVAL;
 		}
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 1bf04f565a..35e1650851 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -203,7 +203,7 @@ sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 static inline uint64_t
@@ -212,7 +212,7 @@ sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD : RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 }
 
 static uint32_t
@@ -268,11 +268,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -309,7 +309,7 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -320,11 +320,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
@@ -401,7 +401,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 	if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
 	    EFX_TEST_OWORD_BIT(rx_prefix[0],
 			       ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		/* EFX_OWORD_FIELD converts little-endian to CPU */
 		m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
 					      ESF_GZ_RX_PREFIX_RSS_HASH);
@@ -414,7 +414,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 		user_mark = EFX_OWORD_FIELD(rx_prefix[0],
 					    ESF_GZ_RX_PREFIX_USER_MARK);
 		if (user_mark != SFC_EF100_USER_MARK_INVALID) {
-			ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 			m->hash.fdir.hi = user_mark;
 		}
 	}
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 53d01612d1..78c16168ed 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -98,7 +98,7 @@ static int
 sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 			     struct rte_mbuf *m)
 {
-	size_t header_len = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t header_len = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			     m->outer_l2_len + m->outer_l3_len : 0) +
 			    m->l2_len + m->l3_len + m->l4_len;
 	size_t payload_len = m->pkt_len - header_len;
@@ -106,12 +106,12 @@ sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 	unsigned int nb_payload_descs;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
-	switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 	case 0:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		break;
 	default:
 		return ENOTSUP;
@@ -164,11 +164,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * pseudo-header checksum which is calculated below,
 		 * but requires contiguous packet headers.
 		 */
-		if ((m->ol_flags & PKT_TX_TUNNEL_MASK) &&
-		    (m->ol_flags & PKT_TX_L4_MASK)) {
+		if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		    (m->ol_flags & RTE_MBUF_F_TX_L4_MASK)) {
 			calc_phdr_cksum = true;
 			max_nb_header_segs = 1;
-		} else if (m->ol_flags & PKT_TX_TCP_SEG) {
+		} else if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			max_nb_header_segs = txq->tso_max_nb_header_descs;
 		}
 
@@ -180,7 +180,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG) {
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			ret = sfc_ef100_tx_prepare_pkt_tso(txq, m);
 			if (unlikely(ret != 0)) {
 				rte_errno = ret;
@@ -197,7 +197,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 * and does not require any assistance.
 			 */
 			ret = rte_net_intel_cksum_flags_prepare(m,
-					m->ol_flags & ~PKT_TX_IP_CKSUM);
+					m->ol_flags & ~RTE_MBUF_F_TX_IP_CKSUM);
 			if (unlikely(ret != 0)) {
 				rte_errno = -ret;
 				break;
@@ -315,10 +315,10 @@ sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
 	uint8_t inner_l3;
 
 	switch (tx_tunnel) {
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_VXLAN;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_GENEVE;
 		break;
 	default:
@@ -338,25 +338,25 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 	uint16_t part_cksum_w;
 	uint16_t l4_offset_w;
 
-	if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == 0) {
-		outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+	if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) == 0) {
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
 		partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
 		part_cksum_w = 0;
 		l4_offset_w = 0;
 	} else {
-		outer_l3 = (m->ol_flags & PKT_TX_OUTER_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(m->ol_flags &
-							   PKT_TX_TUNNEL_MASK);
+							   RTE_MBUF_F_TX_TUNNEL_MASK);
 
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP;
 			part_cksum_w = offsetof(struct rte_tcp_hdr, cksum) >> 1;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP;
 			part_cksum_w = offsetof(struct rte_udp_hdr,
 						dgram_cksum) >> 1;
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -423,7 +423,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 	 */
 	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
 	uint8_t inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(
-					m->ol_flags & PKT_TX_TUNNEL_MASK);
+					m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
@@ -505,7 +505,7 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 #define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
 	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* Tx TSO descriptor */
 		extra_descs++;
 		/*
@@ -552,7 +552,7 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 	size_t header_len;
 	size_t remaining_hdr_len;
 
-	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		outer_iph_off = m->outer_l2_len;
 		outer_udph_off = outer_iph_off + m->outer_l3_len;
 	} else {
@@ -671,7 +671,7 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				break;
 		}
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
 		} else {
 			id = added++ & txq->ptr_mask;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f..eda468df3f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -374,13 +374,13 @@ sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			rte_pktmbuf_data_len(m) = pkt_len;
 
 			m->ol_flags |=
-				(PKT_RX_RSS_HASH *
+				(RTE_MBUF_F_RX_RSS_HASH *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN)) |
-				(PKT_RX_FDIR_ID *
+				(RTE_MBUF_F_RX_FDIR_ID *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN)) |
-				(PKT_RX_FDIR *
+				(RTE_MBUF_F_RX_FDIR *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN));
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42..8ddb830642 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -330,7 +330,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/* Mask RSS hash offload flag if RSS is not enabled */
 	sfc_ef10_rx_ev_to_offloads(rx_ev, m,
 				   (rxq->flags & SFC_EF10_RXQ_RSS_HASH) ?
-				   ~0ull : ~PKT_RX_RSS_HASH);
+				   ~0ull : ~RTE_MBUF_F_RX_RSS_HASH);
 
 	/* data_off already moved past pseudo header */
 	pseudo_hdr = (uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
@@ -338,7 +338,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/*
 	 * Always get RSS hash from pseudo header to avoid
 	 * condition/branching. If it is valid or not depends on
-	 * PKT_RX_RSS_HASH in m->ol_flags.
+	 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 	 */
 	m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
@@ -392,7 +392,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		/*
 		 * Always get RSS hash from pseudo header to avoid
 		 * condition/branching. If it is valid or not depends on
-		 * PKT_RX_RSS_HASH in m->ol_flags.
+		 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 		 */
 		m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index a7f5b9168b..821e2227bb 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -27,9 +27,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			   uint64_t ol_mask)
 {
 	uint32_t tun_ptype = 0;
-	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_IP_CKSUM_* */
 	int8_t ip_csum_err_bit;
-	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_L4_CKSUM_* */
 	int8_t l4_csum_err_bit;
 	uint32_t l2_ptype = 0;
 	uint32_t l3_ptype = 0;
@@ -76,7 +76,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
 		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
 						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
-			ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
@@ -105,9 +105,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP4:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH |
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH |
 			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
-			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 		break;
 	case ESE_DZ_L3_CLASS_IP6_FRAG:
 		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
@@ -116,7 +116,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP6:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		break;
 	case ESE_DZ_L3_CLASS_ARP:
 		/* Override Layer 2 packet type */
@@ -144,7 +144,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_TCP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UDP:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
@@ -152,7 +152,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_UDP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UNKNOWN:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 277fe6c6ca..e58f8bbe8c 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -341,7 +341,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * the size limit. Perform the check in debug mode since MTU
 		 * more than 9k is not supported, but the limit here is 16k-1.
 		 */
-		if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			struct rte_mbuf *m_seg;
 
 			for (m_seg = m; m_seg != NULL; m_seg = m_seg->next) {
@@ -371,7 +371,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t iph_off = ((m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
 			 m_seg->l2_len;
 	size_t tcph_off = iph_off + m_seg->l3_len;
@@ -489,10 +489,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 *
 	 * The same concern applies to outer UDP datagram length field.
 	 */
-	switch (m_seg->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		sfc_tso_outer_udp_fix_len(first_m_seg, hdr_addr);
 		break;
 	default:
@@ -506,10 +506,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * filled in in TSO mbuf. Use zero IPID if there is no IPv4 flag.
 	 * If the packet is still IPv4, HW will simply start from zero IPID.
 	 */
-	if (first_m_seg->ol_flags & PKT_TX_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
-	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
 						first_m_seg->outer_l2_len);
 
@@ -648,7 +648,7 @@ sfc_ef10_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (likely(pktp + 1 != pktp_end))
 			rte_mbuf_prefetch_part1(pktp[1]);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			int rc;
 
 			rc = sfc_ef10_xmit_tso_pkt(txq, m_seg, &added,
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
+			     (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9..66024f3e53 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -148,15 +148,15 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 
 	switch (desc_flags & (EFX_PKT_IPV4 | EFX_CKSUM_IPV4)) {
 	case (EFX_PKT_IPV4 | EFX_CKSUM_IPV4):
-		mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case EFX_PKT_IPV4:
-		mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_IP_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_IP_CKSUM_MASK) ==
-			   PKT_RX_IP_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -164,16 +164,16 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 		 (EFX_PKT_TCP | EFX_PKT_UDP | EFX_CKSUM_TCPUDP))) {
 	case (EFX_PKT_TCP | EFX_CKSUM_TCPUDP):
 	case (EFX_PKT_UDP | EFX_CKSUM_TCPUDP):
-		mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case EFX_PKT_TCP:
 	case EFX_PKT_UDP:
-		mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_L4_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_L4_CKSUM_MASK) ==
-			   PKT_RX_L4_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -224,7 +224,7 @@ sfc_efx_rx_set_rss_hash(struct sfc_efx_rxq *rxq, unsigned int flags,
 						      EFX_RX_HASHALG_TOEPLITZ,
 						      mbuf_data);
 
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 }
 
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 29d0836b65..927e351a6e 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -153,7 +153,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
 	 * zero IPID.
 	 */
-	if (m->ol_flags & PKT_TX_IPV4)
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(tsoh, nh_off);
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index f081e856e1..9029ad1590 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -59,7 +59,7 @@ sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
 	size_t field_ofst;
 	rte_be16_t len;
 
-	if (m->ol_flags & PKT_TX_IPV4) {
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		field_ofst = offsetof(struct rte_ipv4_hdr, total_length);
 		len = rte_cpu_to_be_16(m->l3_len + ip_payload_len);
 	} else {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 936ae815ea..fd79e67efa 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
+	uint16_t this_tag = ((m->ol_flags & RTE_MBUF_F_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
@@ -869,7 +869,7 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			/*
 			 * We expect correct 'pkt->l[2, 3, 4]_len' values
 			 * to be set correctly by the caller
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf7..3b62232553 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -340,8 +340,8 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 
 		cksum = ~rte_raw_cksum(iph, l3_len);
 		mbuf->ol_flags |= cksum ?
-			PKT_RX_IP_CKSUM_BAD :
-			PKT_RX_IP_CKSUM_GOOD;
+			RTE_MBUF_F_RX_IP_CKSUM_BAD :
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (l3 == RTE_PTYPE_L3_IPV6) {
 		struct rte_ipv6_hdr *iph = l3_hdr;
 
@@ -376,7 +376,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 					 * indicates that the sender did not
 					 * generate one [RFC 768].
 					 */
-					mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 					return;
 				}
 			}
@@ -387,7 +387,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 								 l4_hdr);
 		}
 		mbuf->ol_flags |= cksum_ok ?
-			PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 }
 
@@ -544,7 +544,7 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 {
 	void *l3_hdr = packet + l2_len;
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4)) {
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)) {
 		struct rte_ipv4_hdr *iph = l3_hdr;
 		uint16_t cksum;
 
@@ -552,18 +552,18 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 		cksum = rte_raw_cksum(iph, l3_len);
 		iph->hdr_checksum = (cksum == 0xffff) ? cksum : ~cksum;
 	}
-	if (ol_flags & PKT_TX_L4_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 		void *l4_hdr;
 
 		l4_hdr = packet + l2_len + l3_len;
-		if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 			*l4_cksum = &((struct rte_udp_hdr *)l4_hdr)->dgram_cksum;
-		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
+		else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM)
 			*l4_cksum = &((struct rte_tcp_hdr *)l4_hdr)->cksum;
 		else
 			return;
 		**l4_cksum = 0;
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			*l4_phdr_cksum = rte_ipv4_phdr_cksum(l3_hdr, 0);
 		else
 			*l4_phdr_cksum = rte_ipv6_phdr_cksum(l3_hdr, 0);
@@ -627,9 +627,9 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
 
 		nb_segs = mbuf->nb_segs;
 		if (txq->csum &&
-		    ((mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4) ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM))) {
+		    ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4) ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM))) {
 			is_cksum = 1;
 
 			/* Support only packets with at least layer 4
@@ -719,12 +719,12 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		uint16_t hdrs_len;
 		uint64_t tso;
 
-		tso = mbuf_in->ol_flags & PKT_TX_TCP_SEG;
+		tso = mbuf_in->ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 		if (tso) {
 			struct rte_gso_ctx *gso_ctx = &txq->gso_ctx;
 
 			/* TCP segmentation implies TCP checksum offload */
-			mbuf_in->ol_flags |= PKT_TX_TCP_CKSUM;
+			mbuf_in->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 			/* gso size is calculated without RTE_ETHER_CRC_LEN */
 			hdrs_len = mbuf_in->l2_len + mbuf_in->l3_len +
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..4a433435c6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -42,10 +42,10 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
 	if (unlikely(ol_flags)) {
 		/* L4 cksum */
-		uint64_t l4_flags = ol_flags & PKT_TX_L4_MASK;
-		if (l4_flags == PKT_TX_TCP_CKSUM)
+		uint64_t l4_flags = ol_flags & RTE_MBUF_F_TX_L4_MASK;
+		if (l4_flags == RTE_MBUF_F_TX_TCP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
-		else if (l4_flags == PKT_TX_UDP_CKSUM)
+		else if (l4_flags == RTE_MBUF_F_TX_UDP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
 		else
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
@@ -54,7 +54,7 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
 
 		/* L3 cksum */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			sqe.hdr.csum_l3 = 1;
 	}
 
@@ -343,9 +343,9 @@ static inline uint64_t __rte_hot
 nicvf_set_olflags(const cqe_rx_word0_t cqe_rx_w0)
 {
 	static const uint64_t flag_table[3] __rte_cache_aligned = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_UNKNOWN,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 	};
 
 	const uint8_t idx = (cqe_rx_w0.err_opcode == CQE_RX_ERR_L4_CHK) << 1 |
@@ -409,7 +409,7 @@ nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
 {
 	if (likely(cqe_rx_w0.rss_alg)) {
 		pkt->hash.rss = cqe_rx_w2.rss_tag;
-		pkt->ol_flags |= PKT_RX_RSS_HASH;
+		pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	}
 }
@@ -454,8 +454,8 @@ nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 			pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 		if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 			if (unlikely(cqe_rx_w0.vlan_stripped)) {
-				pkt->ol_flags |= PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED;
+				pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED;
 				pkt->vlan_tci =
 					rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 			}
@@ -549,8 +549,8 @@ nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
 		pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 	if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 		if (unlikely(cqe_rx_w0.vlan_stripped)) {
-			pkt->ol_flags |= PKT_RX_VLAN
-				| PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+				| RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 		}
 	}
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..3e1d40bbeb 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -12,7 +12,7 @@
 #define NICVF_RX_OFFLOAD_CKSUM          0x2
 #define NICVF_RX_OFFLOAD_VLAN_STRIP     0x4
 
-#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+#define NICVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
 
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 static inline uint16_t __attribute__((const))
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 0063994688..844b005249 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1136,10 +1136,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 53da1b8450..3c5941d71f 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -43,30 +43,30 @@
 #include "txgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define TXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define TXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define TXGBE_TX_IEEE1588_TMST 0
 #endif
 
 /* Bit Mask to indicate what bits required for building TX context */
-static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_OUTER_UDP_CKSUM |
+static const u64 TXGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM |
 #ifdef RTE_LIB_SECURITY
-		PKT_TX_SEC_OFFLOAD |
+		RTE_MBUF_F_TX_SEC_OFFLOAD |
 #endif
 		TXGBE_TX_IEEE1588_TMST);
 
 #define TXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
 
 /*
  * Prefetch a cache line into all cache levels.
@@ -339,7 +339,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 	type_tucmd_mlhl |= TXGBE_TXD_PTID(tx_offload.ptid);
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_offload_mask.l2_len |= ~0;
 		tx_offload_mask.l3_len |= ~0;
 		tx_offload_mask.l4_len |= ~0;
@@ -347,25 +347,25 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		mss_l4len_idx |= TXGBE_TXD_MSS(tx_offload.tso_segsz);
 		mss_l4len_idx |= TXGBE_TXD_L4LEN(tx_offload.l4_len);
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
 			tx_offload_mask.l2_len |= ~0;
@@ -378,7 +378,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 
 	vlan_macip_lens = TXGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
 
-	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		tx_offload_mask.outer_tun_len |= ~0;
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
@@ -386,16 +386,16 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		tunnel_seed = TXGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
 		tunnel_seed |= TXGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
 
-		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-		case PKT_TX_TUNNEL_IPIP:
+		switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		case RTE_MBUF_F_TX_TUNNEL_IPIP:
 			/* for non UDP / GRE tunneling, set to 0b */
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
-		case PKT_TX_TUNNEL_VXLAN_GPE:
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_UDP;
 			break;
-		case PKT_TX_TUNNEL_GRE:
+		case RTE_MBUF_F_TX_TUNNEL_GRE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_GRE;
 			break;
 		default:
@@ -408,13 +408,13 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
 
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union txgbe_crypto_tx_desc_md *md =
 				(union txgbe_crypto_tx_desc_md *)mdata;
 		tunnel_seed |= TXGBE_TXD_IPSEC_SAIDX(md->sa_idx);
@@ -477,26 +477,26 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_IPCS;
 	}
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_EIPCS;
 	}
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tmp |= TXGBE_TXD_CC;
 		/* implies IPv4 cksum */
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,11 +507,11 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= TXGBE_TXD_LINKSEC;
 	return cmdtype;
 }
@@ -525,67 +525,67 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
 		return txgbe_encode_ptype(ptype);
 
 	/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
-	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+	tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	/* L2 level */
 	ptype = RTE_PTYPE_L2_ETHER;
-	if (oflags & PKT_TX_VLAN)
+	if (oflags & RTE_MBUF_F_TX_VLAN)
 		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
 
 	/* L3 level */
-	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
 		ptype |= RTE_PTYPE_L3_IPV4;
-	else if (oflags & (PKT_TX_OUTER_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
 		ptype |= RTE_PTYPE_L3_IPV6;
 
-	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
-	else if (oflags & (PKT_TX_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_IPV6))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
 
 	/* L4 level */
-	switch (oflags & (PKT_TX_L4_MASK)) {
-	case PKT_TX_TCP_CKSUM:
+	switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
 		break;
 	}
 
-	if (oflags & PKT_TX_TCP_SEG)
+	if (oflags & RTE_MBUF_F_TX_TCP_SEG)
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 
 	/* Tunnel */
-	switch (oflags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GRE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GENEVE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
 		break;
-	case PKT_TX_TUNNEL_IPIP:
-	case PKT_TX_TUNNEL_IP:
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
+	case RTE_MBUF_F_TX_TUNNEL_IP:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_IP;
@@ -669,19 +669,19 @@ txgbe_get_tun_len(struct rte_mbuf *mbuf)
 	const struct txgbe_genevehdr *gh;
 	uint8_t tun_len;
 
-	switch (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		tun_len = 0;
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		tun_len = sizeof(struct txgbe_udphdr)
 			+ sizeof(struct txgbe_vxlanhdr);
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tun_len = sizeof(struct txgbe_nvgrehdr);
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		gh = rte_pktmbuf_read(mbuf,
 			mbuf->outer_l2_len + mbuf->outer_l3_len,
 			sizeof(genevehdr), &genevehdr);
@@ -751,7 +751,7 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -895,20 +895,20 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		cmd_type_len = TXGBE_TXD_FCS;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= TXGBE_TXD_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is the
 				 * not the packet len but the tcp payload len
 				 */
 				pkt_len -= (tx_offload.l2_len +
 					tx_offload.l3_len + tx_offload.l4_len);
 				pkt_len -=
-					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					(tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 					? tx_offload.outer_l2_len +
 					  tx_offload.outer_l3_len : 0;
 			}
@@ -1076,14 +1076,14 @@ static inline uint64_t
 txgbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 	int etfid = txgbe_etflt_id(TXGBE_RXD_PTID(pkt_info));
@@ -1108,12 +1108,12 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 	 * That can be found from rte_eth_rxmode.offloads flag
 	 */
 	pkt_flags = (rx_status & TXGBE_RXD_STAT_VLAN &&
-		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		     vlan_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
 		    ? vlan_flags : 0;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & TXGBE_RXD_STAT_1588)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1126,24 +1126,24 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	/* checksum offload can't be disabled */
 	if (rx_status & TXGBE_RXD_STAT_IPCS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_IPCS
-				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_L4CS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_L4CS
-				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_EIPCS &&
 	    rx_status & TXGBE_RXD_ERR_EIPCS) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & TXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & TXGBE_RXD_ERR_SECERR)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1226,10 +1226,10 @@ txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
 				txgbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
 				rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss =
 					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash =
 					rte_le_to_cpu_16(rxdp[j].qw0.hi.csum) &
 					TXGBE_ATR_HASH_MASK;
@@ -1541,7 +1541,7 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
@@ -1552,9 +1552,9 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
-		} else if (pkt_flags & PKT_RX_FDIR) {
+		} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash =
 				rte_le_to_cpu_16(rxd.qw0.hi.csum) &
 				TXGBE_ATR_HASH_MASK;
@@ -1616,7 +1616,7 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
@@ -1628,9 +1628,9 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
-	} else if (pkt_flags & PKT_RX_FDIR) {
+	} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash = rte_le_to_cpu_16(desc->qw0.hi.csum)
 				& TXGBE_ATR_HASH_MASK;
 		head->hash.fdir.id = rte_le_to_cpu_16(desc->qw0.hi.ipid);
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 214a6ee4c8..a3d6a1d2dc 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 65f08b775a..d03fa08a75 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -929,7 +929,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags == 0 && hdr->gso_type == VIRTIO_NET_HDR_GSO_NONE)
 		return 0;
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -941,7 +941,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -963,7 +963,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	/* GSO request, save required information in mbuf */
@@ -979,8 +979,8 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 			case VIRTIO_NET_HDR_GSO_TCPV4:
 			case VIRTIO_NET_HDR_GSO_TCPV6:
-				m->ol_flags |= PKT_RX_LRO | \
-					PKT_RX_L4_CKSUM_NONE;
+				m->ol_flags |= RTE_MBUF_F_RX_LRO | \
+					RTE_MBUF_F_RX_L4_CKSUM_NONE;
 				break;
 			default:
 				return -EINVAL;
@@ -1747,7 +1747,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
+		if (unlikely(m->ol_flags & RTE_MBUF_F_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
@@ -1766,7 +1766,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			virtio_tso_fix_cksum(m);
 	}
 
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 1d1db60da8..7c77128ff2 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -166,7 +166,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		return 0;
 
 	/* GSO not support in vec path, skip check */
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -178,7 +178,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -200,7 +200,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return 0;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 03957b2bd0..182e4aa74c 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -619,19 +619,19 @@ virtqueue_notify(struct virtqueue *vq)
 static inline void
 virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 {
-	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = cookie->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (cookie->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	switch (csum_l4) {
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
@@ -645,8 +645,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 	}
 
 	/* TCP Segmentation Offload */
-	if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-		hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		hdr->gso_type = (cookie->ol_flags & RTE_MBUF_F_TX_IPV6) ?
 			VIRTIO_NET_HDR_GSO_TCPV6 :
 			VIRTIO_NET_HDR_GSO_TCPV4;
 		hdr->gso_size = cookie->tso_segsz;
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 69e877f816..d961498fdf 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -48,15 +48,14 @@
 #include "vmxnet3_logs.h"
 #include "vmxnet3_ethdev.h"
 
-#define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN | \
-		PKT_TX_IPV6 |     \
-		PKT_TX_IPV4 |     \
-		PKT_TX_L4_MASK |  \
-		PKT_TX_TCP_SEG)
+#define	VMXNET3_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_VLAN | \
+		RTE_MBUF_F_TX_IPV6 |     \
+		RTE_MBUF_F_TX_IPV4 |     \
+		RTE_MBUF_F_TX_L4_MASK |  \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define	VMXNET3_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
 
 static const uint32_t rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2};
 
@@ -359,7 +358,7 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Non-TSO packet cannot occupy more than
 		 * VMXNET3_MAX_TXD_PER_PKT TX descriptors.
 		 */
-		if ((ol_flags & PKT_TX_TCP_SEG) == 0 &&
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 &&
 				m->nb_segs > VMXNET3_MAX_TXD_PER_PKT) {
 			rte_errno = EINVAL;
 			return i;
@@ -367,8 +366,8 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* check that only supported TX offloads are requested. */
 		if ((ol_flags & VMXNET3_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -416,7 +415,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		struct rte_mbuf *m_seg = txm;
 		int copy_size = 0;
-		bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+		bool tso = (txm->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0;
 		/* # of descriptors needed for a packet. */
 		unsigned count = txm->nb_segs;
 
@@ -520,7 +519,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN) {
+		if (txm->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
@@ -535,23 +534,23 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			gdesc->txd.msscof = mss;
 
 			deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
-		} else if (txm->ol_flags & PKT_TX_L4_MASK) {
+		} else if (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 			gdesc->txd.om = VMXNET3_OM_CSUM;
 			gdesc->txd.hlen = txm->l2_len + txm->l3_len;
 
-			switch (txm->ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
+			switch (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_tcp_hdr, cksum);
 				break;
-			case PKT_TX_UDP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_udp_hdr,
 						dgram_cksum);
 				break;
 			default:
 				PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
-					   txm->ol_flags & PKT_TX_L4_MASK);
+					   txm->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 				abort();
 			}
 			deferred++;
@@ -739,35 +738,35 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 
 			rxm->tso_segsz = rcde->mss;
 			*vmxnet3_segs_dynfield(rxm) = rcde->segCnt;
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 		}
 	} else { /* Offloads set in eop */
 		/* Check for RSS */
 		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rxm->hash.rss = rcd->rssHash;
 		}
 
 		/* Check for hardware stripped VLAN tag */
 		if (rcd->ts) {
-			ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
 		}
 
 		/* Check packet type, checksum errors, etc. */
 		if (rcd->cnc) {
-			ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 		} else {
 			if (rcd->v4) {
 				packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
 
 				if (rcd->ipc)
-					ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					ol_flags |= PKT_RX_IP_CKSUM_BAD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -775,17 +774,17 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else if (rcd->v6) {
 				packet_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -793,10 +792,10 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else {
@@ -804,7 +803,7 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 			}
 
 			/* Old variants of vmxnet3 do not provide MSS */
-			if ((ol_flags & PKT_RX_LRO) && rxm->tso_segsz == 0)
+			if ((ol_flags & RTE_MBUF_F_RX_LRO) && rxm->tso_segsz == 0)
 				rxm->tso_segsz = vmxnet3_guess_mss(hw,
 						rcd, rxm);
 		}
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index c79445ce7d..d1b5e9bb2a 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -139,7 +139,7 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(mbuf->ol_flags & EXT_ATTACHED_MBUF));
+				  !!(mbuf->ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 
diff --git a/examples/bpf/t2.c b/examples/bpf/t2.c
index b9bce746c0..67cd908cd6 100644
--- a/examples/bpf/t2.c
+++ b/examples/bpf/t2.c
@@ -6,7 +6,7 @@
  * eBPF program sample.
  * Accepts pointer to struct rte_mbuf as an input parameter.
  * cleanup mbuf's vlan_tci and all related RX flags
- * (PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED).
+ * (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED).
  * Doesn't touch contents of packet data.
  * To compile:
  * clang -O2 -target bpf -Wno-int-to-void-pointer-cast -c t2.c
@@ -27,7 +27,7 @@ entry(void *pkt)
 
 	mb = pkt;
 	mb->vlan_tci = 0;
-	mb->ol_flags &= ~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+	mb->ol_flags &= ~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	return 1;
 }
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f245369720..c11ed1c0f6 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -299,7 +299,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			rte_pktmbuf_free(m);
 
 			/* request HW to regenerate IPv4 cksum */
-			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 
 			/* If we fail to fragment the packet */
 			if (unlikely (len2 < 0))
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790b..bbe343e5fa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -359,7 +359,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 			}
 
 			/* update offloading flags */
-			m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			m->ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 		}
 		ip_dst = rte_be_to_cpu_32(ip_hdr->dst_addr);
 
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index bfa7ff7217..bd233752c8 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -159,8 +159,8 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
 
 	if ((ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
-			if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
+			if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
 				cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			else
 				cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -464,7 +464,7 @@ esp_outbound_post(struct rte_mbuf *m,
 
 	if ((type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 	} else {
 		RTE_ASSERT(cop != NULL);
 		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..0a19033a7a 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -466,7 +466,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	 * with the security session.
 	 */
 
-	if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD &&
+	if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD &&
 			rte_security_dynfield_is_registered()) {
 		struct ipsec_sa *sa;
 		struct ipsec_mbuf_metadata *priv;
@@ -533,7 +533,7 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ip->ip_sum = 0;
 
 		/* calculate IPv4 cksum in SW */
-		if ((pkt->ol_flags & PKT_TX_IP_CKSUM) == 0)
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) == 0)
 			ip->ip_sum = rte_ipv4_cksum((struct rte_ipv4_hdr *)ip);
 
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
@@ -696,7 +696,7 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 		}
 
 		/* Only check SPI match for processed IPSec packets */
-		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
+		if (i < lim && ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)) {
 			free_pkts(&m, 1);
 			continue;
 		}
@@ -978,7 +978,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -995,7 +995,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 0);
 		} else {
@@ -1029,7 +1029,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -1047,7 +1047,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 1);
 		} else {
@@ -2302,10 +2302,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		qconf->tx_queue_id[portid] = tx_queueid;
 
 		/* Pre-populate pkt offloads based on capabilities */
-		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
-		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
+		qconf->outbound.ipv4_offloads = RTE_MBUF_F_TX_IPV4;
+		qconf->outbound.ipv6_offloads = RTE_MBUF_F_TX_IPV6;
 		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
-			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
+			qconf->outbound.ipv4_offloads |= RTE_MBUF_F_TX_IP_CKSUM;
 
 		tx_queueid++;
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..f7703b7167 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -208,9 +208,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 
 	switch (type) {
 	case PKT_TYPE_PLAIN_IPV4:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -226,9 +226,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 
 	case PKT_TYPE_PLAIN_IPV6:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -367,7 +367,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 				      sess->security.ses, pkt, NULL);
 
 	/* Mark the packet for Tx security offload */
-	pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+	pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 	/* Get the port to which this pkt need to be submitted */
 	port_id = sa->portid;
@@ -482,7 +482,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 						      NULL);
 
 			/* Mark the packet for Tx security offload */
-			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 			/* Provide L2 len for Outbound processing */
 			pkt->l2_len = RTE_ETHER_HDR_LEN;
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c9..7f2199290e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -32,7 +32,7 @@
 
 #define IP6_FULL_MASK (sizeof(((struct ip_addr *)NULL)->ip.ip6.ip6) * CHAR_BIT)
 
-#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)
+#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)
 
 struct supported_cipher_algo {
 	const char *keyword;
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fb..d5d5217497 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -463,7 +463,7 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 			   sizeof(struct clock_id));
 
 		/* Enable flag for hardware timestamping. */
-		created_pkt->ol_flags |= PKT_TX_IEEE1588_TMST;
+		created_pkt->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 
 		/*Read value from NIC to prevent latching with old value. */
 		rte_eth_timesync_read_tx_timestamp(ptp_data->portid,
@@ -625,7 +625,7 @@ lcore_main(void)
 				continue;
 
 			/* Packet is parsed to determine which type. 8< */
-			if (m->ol_flags & PKT_RX_IEEE1588_PTP)
+			if (m->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)
 				parse_ptp_frames(portid, m);
 			/* >8 End of packet is parsed to determine which type. */
 
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 1603c29fb5..f1248b0a36 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1036,15 +1036,15 @@ static void virtio_tx_offload(struct rte_mbuf *m)
 	tcp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
 		m->l2_len + m->l3_len);
 
-	m->ol_flags |= PKT_TX_TCP_SEG;
+	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) {
-		m->ol_flags |= PKT_TX_IPV4;
-		m->ol_flags |= PKT_TX_IP_CKSUM;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		ipv4_hdr = l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = rte_ipv4_phdr_cksum(l3_hdr, m->ol_flags);
 	} else { /* assume ethertype == RTE_ETHER_TYPE_IPV6 */
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		tcp_hdr->cksum = rte_ipv6_phdr_cksum(l3_hdr, m->ol_flags);
 	}
 }
@@ -1115,7 +1115,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
@@ -1139,7 +1139,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 		m->vlan_tci = vlan_tag;
 	}
 
-	if (m->ol_flags & PKT_RX_LRO)
+	if (m->ol_flags & RTE_MBUF_F_RX_LRO)
 		virtio_tx_offload(m);
 
 	tx_q->m_table[tx_q->len++] = m;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..330fbb2722 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1448,13 +1448,13 @@ struct rte_eth_conf {
 #define DEV_TX_OFFLOAD_SECURITY         0x00020000
 /**
  * Device supports generic UDP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
 /**
  * Device supports generic IP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..ba3e6f39b5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1422,11 +1422,12 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
  * RTE_FLOW_ITEM_TYPE_META
  *
  * Matches a specified metadata value. On egress, metadata can be set
- * either by mbuf dynamic metadata field with PKT_TX_DYNF_METADATA flag or
- * RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
+ * either by mbuf dynamic metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag
+ * or RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
  * sets metadata for a packet and the metadata will be reported via mbuf
- * metadata dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic mbuf
- * field must be registered in advance by rte_flow_dynf_metadata_register().
+ * metadata dynamic field with RTE_MBUF_DYNFLAG_RX_METADATA flag. The dynamic
+ * mbuf field must be registered in advance by
+ * rte_flow_dynf_metadata_register().
  */
 struct rte_flow_item_meta {
 	uint32_t data;
@@ -1900,8 +1901,8 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_JUMP,
 
 	/**
-	 * Attaches an integer value to packets and sets PKT_RX_FDIR and
-	 * PKT_RX_FDIR_ID mbuf flags.
+	 * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+	 * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
 	 *
 	 * See struct rte_flow_action_mark.
 	 */
@@ -1909,7 +1910,7 @@ enum rte_flow_action_type {
 
 	/**
 	 * Flags packets. Similar to MARK without a specific value; only
-	 * sets the PKT_RX_FDIR mbuf flag.
+	 * sets the RTE_MBUF_F_RX_FDIR mbuf flag.
 	 *
 	 * No associated configuration structure.
 	 */
@@ -2414,8 +2415,8 @@ enum rte_flow_action_type {
 /**
  * RTE_FLOW_ACTION_TYPE_MARK
  *
- * Attaches an integer value to packets and sets PKT_RX_FDIR and
- * PKT_RX_FDIR_ID mbuf flags.
+ * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+ * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
  *
  * This value is arbitrary and application-defined. Maximum allowed value
  * depends on the underlying implementation. It is returned in the
@@ -2960,10 +2961,10 @@ struct rte_flow_action_set_tag {
  * RTE_FLOW_ACTION_TYPE_SET_META
  *
  * Set metadata. Metadata set by mbuf metadata dynamic field with
- * PKT_TX_DYNF_DATA flag on egress will be overridden by this action. On
+ * RTE_MBUF_DYNFLAG_TX_METADATA flag on egress will be overridden by this
+ * action. On
  * ingress, the metadata will be carried by mbuf metadata dynamic field
- * with PKT_RX_DYNF_METADATA flag if set.  The dynamic mbuf field must be
- * registered in advance by rte_flow_dynf_metadata_register().
+ * with RTE_MBUF_DYNFLAG_RX_METADATA flag if set.  The dynamic mbuf field must
+ * be registered in advance by rte_flow_dynf_metadata_register().
  *
  * Altering partial bits is supported with mask. For bits which have never
  * been set, unpredictable value will be seen depending on driver
@@ -3261,8 +3262,12 @@ extern uint64_t rte_flow_dynf_metadata_mask;
 	RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t *)
 
 /* Mbuf dynamic flags for metadata. */
-#define PKT_RX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
-#define PKT_TX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
+#define RTE_MBUF_DYNFLAG_RX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_RX_DYNF_METADATA RTE_DEPRECATED(PKT_RX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_RX_METADATA
+#define RTE_MBUF_DYNFLAG_TX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_TX_DYNF_METADATA RTE_DEPRECATED(PKT_TX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_TX_METADATA
 
 __rte_experimental
 static inline uint32_t
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..c67dbdf102 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -869,8 +869,8 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
 	uint16_t dropped;
 
 	if (!eth_rx_queue_info->ena_vector) {
-		/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
-		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
+		/* 0xffff ffff if RTE_MBUF_F_RX_RSS_HASH is set, otherwise 0 */
+		rss_mask = ~(((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0) - 1);
 		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
 		for (i = 0; i < num; i++) {
 			m = mbufs[i];
diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 4d5f303fa6..2c258b22bf 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -18,26 +18,26 @@
 #define TCP_HDR_PSH_MASK ((uint8_t)0x08)
 #define TCP_HDR_FIN_MASK ((uint8_t)0x01)
 
-#define IS_IPV4_TCP(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4))
-
-#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_GRE_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_GRE))
-
-#define IS_IPV4_UDP(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4))
+#define IS_IPV4_TCP(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4))
+
+#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_GRE_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_GRE))
+
+#define IS_IPV4_UDP(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4))
 
 /**
  * Internal function which updates the UDP header of a packet, following
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 166aace73a..1a7ef30dde 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -37,7 +37,7 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	tail_idx = nb_segs - 1;
 
 	/* Only update UDP header for VxLAN packets. */
-	update_udp_hdr = (pkt->ol_flags & PKT_TX_TUNNEL_VXLAN) ? 1 : 0;
+	update_udp_hdr = (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ? 1 : 0;
 
 	for (i = 0; i < nb_segs; i++) {
 		update_ipv4_header(segs[i], outer_ipv4_offset, outer_id);
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee..58037d6b5d 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -43,7 +43,7 @@ rte_gso_segment(struct rte_mbuf *pkt,
 		return -EINVAL;
 
 	if (gso_ctx->gso_size >= pkt->pkt_len) {
-		pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
+		pkt->ol_flags &= (~(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG));
 		return 0;
 	}
 
@@ -57,26 +57,26 @@ rte_gso_segment(struct rte_mbuf *pkt,
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
 			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
 	} else {
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b1..777d0a55fb 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -77,8 +77,8 @@ struct rte_gso_ctx {
  *
  * Before calling rte_gso_segment(), applications must set proper ol_flags
  * for the packet. The GSO library uses the same macros as that of TSO.
- * For example, set PKT_TX_TCP_SEG and PKT_TX_IPV4 in ol_flags to segment
- * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the PKT_TX_TCP_SEG
+ * For example, set RTE_MBUF_F_TX_TCP_SEG and RTE_MBUF_F_TX_IPV4 in ol_flags to segment
+ * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the RTE_MBUF_F_TX_TCP_SEG
  * flag is removed for all GSO segments and the input packet.
  *
  * Each of the newly-created GSO segments is organized as a two-segment
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..17442a98f2 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -399,7 +399,7 @@ static inline int32_t
 trs_process_check(struct rte_mbuf *mb, struct rte_mbuf **ml,
 	uint32_t *tofs, struct rte_esp_tail espt, uint32_t hlen, uint32_t tlen)
 {
-	if ((mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) != 0 ||
+	if ((mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) != 0 ||
 			tlen + hlen > mb->pkt_len)
 		return -EBADMSG;
 
@@ -487,8 +487,8 @@ trs_process_step3(struct rte_mbuf *mb)
 	/* reset mbuf packet type */
 	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
@@ -505,8 +505,8 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
 	mb->packet_type = RTE_PTYPE_UNKNOWN;
 	mb->tx_offload = (mb->tx_offload & txof_msk) | txof_val;
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..2bbd5df2b8 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -544,7 +544,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	icv_len = sa->icv_len;
 
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0) {
 			ml = rte_pktmbuf_lastseg(mb[i]);
 			/* remove high-order 32 bits of esn from packet len */
 			mb[i]->pkt_len -= sa->sqh_len;
@@ -580,7 +580,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
 	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
 	for (i = 0; i != num; i++) {
 
-		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		mb[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 		if (ol_flags != 0)
 			rte_security_set_pkt_metadata(ss->security.ctx,
 				ss->security.ses, mb[i], NULL);
diff --git a/lib/ipsec/misc.h b/lib/ipsec/misc.h
index 79b9a20762..8e72ca992d 100644
--- a/lib/ipsec/misc.h
+++ b/lib/ipsec/misc.h
@@ -173,7 +173,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	j = num - n;
 	for (i = 0; j != 0 && i != num; i++) {
 		if (st[i] != 0) {
-			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			mb[i]->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 			j--;
 		}
 	}
diff --git a/lib/ipsec/rte_ipsec_group.h b/lib/ipsec/rte_ipsec_group.h
index ea3bdfad95..60ab297710 100644
--- a/lib/ipsec/rte_ipsec_group.h
+++ b/lib/ipsec/rte_ipsec_group.h
@@ -61,7 +61,7 @@ rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
  * Take as input completed crypto ops, extract related mbufs
  * and group them by rte_ipsec_session they belong to.
  * For mbuf which crypto-op wasn't completed successfully
- * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
  * Note that mbufs with undetermined SA (session-less) are not freed
  * by the function, but are placed beyond mbufs for the last valid group.
  * It is a user responsibility to handle them further.
@@ -95,9 +95,9 @@ rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
 		m = cop[i]->sym[0].m_src;
 		ns = cop[i]->sym[0].session;
 
-		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
-			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 		/* no valid session found */
 		if (ns == NULL) {
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..4754093873 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -590,7 +590,7 @@ pkt_flag_process(const struct rte_ipsec_session *ss,
 
 	k = 0;
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0)
 			k++;
 		else
 			dr[i - k] = i;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index f7e3c1a187..f2e740c363 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -133,7 +133,7 @@ rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
 	RTE_ASSERT(m->shinfo->fcb_opaque == m);
 
 	rte_mbuf_ext_refcnt_set(m->shinfo, 1);
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	if (m->next != NULL) {
 		m->next = NULL;
 		m->nb_segs = 1;
@@ -213,7 +213,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	rte_mbuf_refcnt_set(m, 1);
 	m->next = NULL;
 
@@ -620,7 +620,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	__rte_pktmbuf_copy_hdr(mc, m);
 
 	/* copied mbuf is not indirect or external */
-	mc->ol_flags = m->ol_flags & ~(IND_ATTACHED_MBUF|EXT_ATTACHED_MBUF);
+	mc->ol_flags = m->ol_flags & ~(RTE_MBUF_F_INDIRECT|RTE_MBUF_F_EXTERNAL);
 
 	prev = &mc->next;
 	m_last = mc;
@@ -685,7 +685,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	fprintf(f, "  pkt_len=%u, ol_flags=%#"PRIx64", nb_segs=%u, port=%u",
 		m->pkt_len, m->ol_flags, m->nb_segs, m->port);
 
-	if (m->ol_flags & (PKT_RX_VLAN | PKT_TX_VLAN))
+	if (m->ol_flags & (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_TX_VLAN))
 		fprintf(f, ", vlan_tci=%u", m->vlan_tci);
 
 	fprintf(f, ", ptype=%#"PRIx32"\n", m->packet_type);
@@ -751,30 +751,30 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_RX_VLAN: return "PKT_RX_VLAN";
-	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
-	case PKT_RX_FDIR: return "PKT_RX_FDIR";
-	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
-	case PKT_RX_L4_CKSUM_GOOD: return "PKT_RX_L4_CKSUM_GOOD";
-	case PKT_RX_L4_CKSUM_NONE: return "PKT_RX_L4_CKSUM_NONE";
-	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
-	case PKT_RX_IP_CKSUM_GOOD: return "PKT_RX_IP_CKSUM_GOOD";
-	case PKT_RX_IP_CKSUM_NONE: return "PKT_RX_IP_CKSUM_NONE";
-	case PKT_RX_OUTER_IP_CKSUM_BAD: return "PKT_RX_OUTER_IP_CKSUM_BAD";
-	case PKT_RX_VLAN_STRIPPED: return "PKT_RX_VLAN_STRIPPED";
-	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
-	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-	case PKT_RX_FDIR_ID: return "PKT_RX_FDIR_ID";
-	case PKT_RX_FDIR_FLX: return "PKT_RX_FDIR_FLX";
-	case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
-	case PKT_RX_QINQ: return "PKT_RX_QINQ";
-	case PKT_RX_LRO: return "PKT_RX_LRO";
-	case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
-	case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
-	case PKT_RX_OUTER_L4_CKSUM_BAD: return "PKT_RX_OUTER_L4_CKSUM_BAD";
-	case PKT_RX_OUTER_L4_CKSUM_GOOD: return "PKT_RX_OUTER_L4_CKSUM_GOOD";
-	case PKT_RX_OUTER_L4_CKSUM_INVALID:
-		return "PKT_RX_OUTER_L4_CKSUM_INVALID";
+	case RTE_MBUF_F_RX_VLAN: return "RTE_MBUF_F_RX_VLAN";
+	case RTE_MBUF_F_RX_RSS_HASH: return "RTE_MBUF_F_RX_RSS_HASH";
+	case RTE_MBUF_F_RX_FDIR: return "RTE_MBUF_F_RX_FDIR";
+	case RTE_MBUF_F_RX_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_L4_CKSUM_NONE: return "RTE_MBUF_F_RX_L4_CKSUM_NONE";
+	case RTE_MBUF_F_RX_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_IP_CKSUM_GOOD: return "RTE_MBUF_F_RX_IP_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_IP_CKSUM_NONE: return "RTE_MBUF_F_RX_IP_CKSUM_NONE";
+	case RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_VLAN_STRIPPED: return "RTE_MBUF_F_RX_VLAN_STRIPPED";
+	case RTE_MBUF_F_RX_IEEE1588_PTP: return "RTE_MBUF_F_RX_IEEE1588_PTP";
+	case RTE_MBUF_F_RX_IEEE1588_TMST: return "RTE_MBUF_F_RX_IEEE1588_TMST";
+	case RTE_MBUF_F_RX_FDIR_ID: return "RTE_MBUF_F_RX_FDIR_ID";
+	case RTE_MBUF_F_RX_FDIR_FLX: return "RTE_MBUF_F_RX_FDIR_FLX";
+	case RTE_MBUF_F_RX_QINQ_STRIPPED: return "RTE_MBUF_F_RX_QINQ_STRIPPED";
+	case RTE_MBUF_F_RX_QINQ: return "RTE_MBUF_F_RX_QINQ";
+	case RTE_MBUF_F_RX_LRO: return "RTE_MBUF_F_RX_LRO";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD: return "RTE_MBUF_F_RX_SEC_OFFLOAD";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED: return "RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID:
+		return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID";
 
 	default: return NULL;
 	}
@@ -791,37 +791,37 @@ int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask rx_flags[] = {
-		{ PKT_RX_VLAN, PKT_RX_VLAN, NULL },
-		{ PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, NULL },
-		{ PKT_RX_FDIR, PKT_RX_FDIR, NULL },
-		{ PKT_RX_L4_CKSUM_BAD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_GOOD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_NONE, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_UNKNOWN, PKT_RX_L4_CKSUM_MASK,
-		  "PKT_RX_L4_CKSUM_UNKNOWN" },
-		{ PKT_RX_IP_CKSUM_BAD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_GOOD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_NONE, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_UNKNOWN, PKT_RX_IP_CKSUM_MASK,
-		  "PKT_RX_IP_CKSUM_UNKNOWN" },
-		{ PKT_RX_OUTER_IP_CKSUM_BAD, PKT_RX_OUTER_IP_CKSUM_BAD, NULL },
-		{ PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN_STRIPPED, NULL },
-		{ PKT_RX_IEEE1588_PTP, PKT_RX_IEEE1588_PTP, NULL },
-		{ PKT_RX_IEEE1588_TMST, PKT_RX_IEEE1588_TMST, NULL },
-		{ PKT_RX_FDIR_ID, PKT_RX_FDIR_ID, NULL },
-		{ PKT_RX_FDIR_FLX, PKT_RX_FDIR_FLX, NULL },
-		{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
-		{ PKT_RX_LRO, PKT_RX_LRO, NULL },
-		{ PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
-		{ PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
-		{ PKT_RX_QINQ, PKT_RX_QINQ, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_BAD, PKT_RX_OUTER_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_GOOD, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_VLAN, RTE_MBUF_F_RX_VLAN, NULL },
+		{ RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, NULL },
+		{ RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_BAD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_NONE, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_IP_CKSUM_BAD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_NONE, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN, RTE_MBUF_F_RX_IP_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, NULL },
+		{ RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_PTP, RTE_MBUF_F_RX_IEEE1588_PTP, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_TMST, RTE_MBUF_F_RX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID, NULL },
+		{ RTE_MBUF_F_RX_FDIR_FLX, RTE_MBUF_F_RX_FDIR_FLX, NULL },
+		{ RTE_MBUF_F_RX_QINQ_STRIPPED, RTE_MBUF_F_RX_QINQ_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_LRO, RTE_MBUF_F_RX_LRO, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD, RTE_MBUF_F_RX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, NULL },
+		{ RTE_MBUF_F_RX_QINQ, RTE_MBUF_F_RX_QINQ, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_INVALID, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_UNKNOWN, PKT_RX_OUTER_L4_CKSUM_MASK,
-		  "PKT_RX_OUTER_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN" },
 	};
 	const char *name;
 	unsigned int i;
@@ -856,32 +856,32 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_TX_VLAN: return "PKT_TX_VLAN";
-	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
-	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
-	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
-	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
-	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
-	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
-	case PKT_TX_IPV4: return "PKT_TX_IPV4";
-	case PKT_TX_IPV6: return "PKT_TX_IPV6";
-	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
-	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
-	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
-	case PKT_TX_TUNNEL_VXLAN: return "PKT_TX_TUNNEL_VXLAN";
-	case PKT_TX_TUNNEL_GTP: return "PKT_TX_TUNNEL_GTP";
-	case PKT_TX_TUNNEL_GRE: return "PKT_TX_TUNNEL_GRE";
-	case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
-	case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
-	case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
-	case PKT_TX_TUNNEL_VXLAN_GPE: return "PKT_TX_TUNNEL_VXLAN_GPE";
-	case PKT_TX_TUNNEL_IP: return "PKT_TX_TUNNEL_IP";
-	case PKT_TX_TUNNEL_UDP: return "PKT_TX_TUNNEL_UDP";
-	case PKT_TX_QINQ: return "PKT_TX_QINQ";
-	case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
-	case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
-	case PKT_TX_UDP_SEG: return "PKT_TX_UDP_SEG";
-	case PKT_TX_OUTER_UDP_CKSUM: return "PKT_TX_OUTER_UDP_CKSUM";
+	case RTE_MBUF_F_TX_VLAN: return "RTE_MBUF_F_TX_VLAN";
+	case RTE_MBUF_F_TX_IP_CKSUM: return "RTE_MBUF_F_TX_IP_CKSUM";
+	case RTE_MBUF_F_TX_TCP_CKSUM: return "RTE_MBUF_F_TX_TCP_CKSUM";
+	case RTE_MBUF_F_TX_SCTP_CKSUM: return "RTE_MBUF_F_TX_SCTP_CKSUM";
+	case RTE_MBUF_F_TX_UDP_CKSUM: return "RTE_MBUF_F_TX_UDP_CKSUM";
+	case RTE_MBUF_F_TX_IEEE1588_TMST: return "RTE_MBUF_F_TX_IEEE1588_TMST";
+	case RTE_MBUF_F_TX_TCP_SEG: return "RTE_MBUF_F_TX_TCP_SEG";
+	case RTE_MBUF_F_TX_IPV4: return "RTE_MBUF_F_TX_IPV4";
+	case RTE_MBUF_F_TX_IPV6: return "RTE_MBUF_F_TX_IPV6";
+	case RTE_MBUF_F_TX_OUTER_IP_CKSUM: return "RTE_MBUF_F_TX_OUTER_IP_CKSUM";
+	case RTE_MBUF_F_TX_OUTER_IPV4: return "RTE_MBUF_F_TX_OUTER_IPV4";
+	case RTE_MBUF_F_TX_OUTER_IPV6: return "RTE_MBUF_F_TX_OUTER_IPV6";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN: return "RTE_MBUF_F_TX_TUNNEL_VXLAN";
+	case RTE_MBUF_F_TX_TUNNEL_GTP: return "RTE_MBUF_F_TX_TUNNEL_GTP";
+	case RTE_MBUF_F_TX_TUNNEL_GRE: return "RTE_MBUF_F_TX_TUNNEL_GRE";
+	case RTE_MBUF_F_TX_TUNNEL_IPIP: return "RTE_MBUF_F_TX_TUNNEL_IPIP";
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE: return "RTE_MBUF_F_TX_TUNNEL_GENEVE";
+	case RTE_MBUF_F_TX_TUNNEL_MPLSINUDP: return "RTE_MBUF_F_TX_TUNNEL_MPLSINUDP";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: return "RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE";
+	case RTE_MBUF_F_TX_TUNNEL_IP: return "RTE_MBUF_F_TX_TUNNEL_IP";
+	case RTE_MBUF_F_TX_TUNNEL_UDP: return "RTE_MBUF_F_TX_TUNNEL_UDP";
+	case RTE_MBUF_F_TX_QINQ: return "RTE_MBUF_F_TX_QINQ";
+	case RTE_MBUF_F_TX_MACSEC: return "RTE_MBUF_F_TX_MACSEC";
+	case RTE_MBUF_F_TX_SEC_OFFLOAD: return "RTE_MBUF_F_TX_SEC_OFFLOAD";
+	case RTE_MBUF_F_TX_UDP_SEG: return "RTE_MBUF_F_TX_UDP_SEG";
+	case RTE_MBUF_F_TX_OUTER_UDP_CKSUM: return "RTE_MBUF_F_TX_OUTER_UDP_CKSUM";
 	default: return NULL;
 	}
 }
@@ -891,33 +891,33 @@ int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask tx_flags[] = {
-		{ PKT_TX_VLAN, PKT_TX_VLAN, NULL },
-		{ PKT_TX_IP_CKSUM, PKT_TX_IP_CKSUM, NULL },
-		{ PKT_TX_TCP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_SCTP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_UDP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_L4_NO_CKSUM, PKT_TX_L4_MASK, "PKT_TX_L4_NO_CKSUM" },
-		{ PKT_TX_IEEE1588_TMST, PKT_TX_IEEE1588_TMST, NULL },
-		{ PKT_TX_TCP_SEG, PKT_TX_TCP_SEG, NULL },
-		{ PKT_TX_IPV4, PKT_TX_IPV4, NULL },
-		{ PKT_TX_IPV6, PKT_TX_IPV6, NULL },
-		{ PKT_TX_OUTER_IP_CKSUM, PKT_TX_OUTER_IP_CKSUM, NULL },
-		{ PKT_TX_OUTER_IPV4, PKT_TX_OUTER_IPV4, NULL },
-		{ PKT_TX_OUTER_IPV6, PKT_TX_OUTER_IPV6, NULL },
-		{ PKT_TX_TUNNEL_VXLAN, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GTP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GRE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IPIP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_VXLAN_GPE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_UDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_QINQ, PKT_TX_QINQ, NULL },
-		{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
-		{ PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
-		{ PKT_TX_UDP_SEG, PKT_TX_UDP_SEG, NULL },
-		{ PKT_TX_OUTER_UDP_CKSUM, PKT_TX_OUTER_UDP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN, NULL },
+		{ RTE_MBUF_F_TX_IP_CKSUM, RTE_MBUF_F_TX_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_SCTP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_L4_NO_CKSUM, RTE_MBUF_F_TX_L4_MASK, "RTE_MBUF_F_TX_L4_NO_CKSUM" },
+		{ RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_TX_TCP_SEG, RTE_MBUF_F_TX_TCP_SEG, NULL },
+		{ RTE_MBUF_F_TX_IPV4, RTE_MBUF_F_TX_IPV4, NULL },
+		{ RTE_MBUF_F_TX_IPV6, RTE_MBUF_F_TX_IPV6, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IP_CKSUM, RTE_MBUF_F_TX_OUTER_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV4, RTE_MBUF_F_TX_OUTER_IPV4, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV6, RTE_MBUF_F_TX_OUTER_IPV6, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GTP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GRE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IPIP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GENEVE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_UDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ, NULL },
+		{ RTE_MBUF_F_TX_MACSEC, RTE_MBUF_F_TX_MACSEC, NULL },
+		{ RTE_MBUF_F_TX_SEC_OFFLOAD, RTE_MBUF_F_TX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_TX_UDP_SEG, RTE_MBUF_F_TX_UDP_SEG, NULL },
+		{ RTE_MBUF_F_TX_OUTER_UDP_CKSUM, RTE_MBUF_F_TX_OUTER_UDP_CKSUM, NULL },
 	};
 	const char *name;
 	unsigned int i;
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index a555f216ae..da158076c3 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -77,7 +77,7 @@ int rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen);
  * @param mask
  *   The mask describing the flag. Usually only one bit must be set.
  *   Several bits can be given if they belong to the same mask.
- *   Ex: PKT_TX_L4_MASK.
+ *   Ex: RTE_MBUF_F_TX_L4_MASK.
  * @return
  *   The name of this flag, or NULL if it's not a valid TX flag.
  */
@@ -874,7 +874,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
 
-	m->ol_flags &= EXT_ATTACHED_MBUF;
+	m->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	m->packet_type = 0;
 	rte_pktmbuf_reset_headroom(m);
 
@@ -1089,7 +1089,7 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
 	m->data_len = 0;
 	m->data_off = 0;
 
-	m->ol_flags |= EXT_ATTACHED_MBUF;
+	m->ol_flags |= RTE_MBUF_F_EXTERNAL;
 	m->shinfo = shinfo;
 }
 
@@ -1163,7 +1163,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 		/* if m is not direct, get the mbuf that embeds the data */
 		rte_mbuf_refcnt_update(rte_mbuf_from_indirect(m), 1);
 		mi->priv_size = m->priv_size;
-		mi->ol_flags = m->ol_flags | IND_ATTACHED_MBUF;
+		mi->ol_flags = m->ol_flags | RTE_MBUF_F_INDIRECT;
 	}
 
 	__rte_pktmbuf_copy_hdr(mi, m);
@@ -1297,7 +1297,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	struct rte_mbuf_ext_shared_info *shinfo;
 
 	/* Clear flags, mbuf is being freed. */
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	shinfo = m->shinfo;
 
 	/* Optimize for performance - do not dec/reinit */
@@ -1828,28 +1828,28 @@ rte_validate_tx_offload(const struct rte_mbuf *m)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* Does packet set any of available offloads? */
-	if (!(ol_flags & PKT_TX_OFFLOAD_MASK))
+	if (!(ol_flags & RTE_MBUF_F_TX_OFFLOAD_MASK))
 		return 0;
 
 	/* IP checksum can be counted only for IPv4 packet */
-	if ((ol_flags & PKT_TX_IP_CKSUM) && (ol_flags & PKT_TX_IPV6))
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) && (ol_flags & RTE_MBUF_F_TX_IPV6))
 		return -EINVAL;
 
 	/* IP type not set when required */
-	if (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
-		if (!(ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)))
+	if (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
+		if (!(ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)))
 			return -EINVAL;
 
 	/* Check requirements for TSO packet */
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		if ((m->tso_segsz == 0) ||
-				((ol_flags & PKT_TX_IPV4) &&
-				!(ol_flags & PKT_TX_IP_CKSUM)))
+				((ol_flags & RTE_MBUF_F_TX_IPV4) &&
+				 !(ol_flags & RTE_MBUF_F_TX_IP_CKSUM)))
 			return -EINVAL;
 
-	/* PKT_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
-	if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) &&
-			!(ol_flags & PKT_TX_OUTER_IPV4))
+	/* RTE_MBUF_F_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
+	if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) &&
+			!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4))
 		return -EINVAL;
 
 	return 0;
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 93db9292c0..fed231bf91 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -43,271 +43,377 @@ extern "C" {
 /**
  * The RX packet is a 802.1q VLAN packet, and the tci has been
  * saved in in mbuf->vlan_tci.
- * If the flag PKT_RX_VLAN_STRIPPED is also present, the VLAN
+ * If the flag RTE_MBUF_F_RX_VLAN_STRIPPED is also present, the VLAN
  * header has been stripped from mbuf data, else it is still
  * present.
  */
-#define PKT_RX_VLAN          (1ULL << 0)
+#define RTE_MBUF_F_RX_VLAN          (1ULL << 0)
+#define PKT_RX_VLAN RTE_DEPRECATED(PKT_RX_VLAN) RTE_MBUF_F_RX_VLAN
 
 /** RX packet with RSS hash result. */
-#define PKT_RX_RSS_HASH      (1ULL << 1)
+#define RTE_MBUF_F_RX_RSS_HASH      (1ULL << 1)
+#define PKT_RX_RSS_HASH RTE_DEPRECATED(PKT_RX_RSS_HASH) RTE_MBUF_F_RX_RSS_HASH
 
  /** RX packet with FDIR match indicate. */
-#define PKT_RX_FDIR          (1ULL << 2)
+#define RTE_MBUF_F_RX_FDIR          (1ULL << 2)
+#define PKT_RX_FDIR RTE_DEPRECATED(PKT_RX_FDIR) RTE_MBUF_F_RX_FDIR
 
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
-#define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define PKT_RX_OUTER_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
 
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
  * in the RX configuration of the PMD.
- * When PKT_RX_VLAN_STRIPPED is set, PKT_RX_VLAN must also be set.
+ * When RTE_MBUF_F_RX_VLAN_STRIPPED is set, RTE_MBUF_F_RX_VLAN must also be set.
  */
-#define PKT_RX_VLAN_STRIPPED (1ULL << 6)
+#define RTE_MBUF_F_RX_VLAN_STRIPPED (1ULL << 6)
+#define PKT_RX_VLAN_STRIPPED RTE_DEPRECATED(PKT_RX_VLAN_STRIPPED) \
+		RTE_MBUF_F_RX_VLAN_STRIPPED
 
 /**
  * Mask of bits used to determine the status of RX IP checksum.
- * - PKT_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
- * - PKT_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
- * - PKT_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
- * - PKT_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
+ * - RTE_MBUF_F_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
+ * - RTE_MBUF_F_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
  *   data, but the integrity of the IP header is verified.
  */
-#define PKT_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_MASK RTE_DEPRECATED(PKT_RX_IP_CKSUM_MASK) \
+		RTE_MBUF_F_RX_IP_CKSUM_MASK
 
-#define PKT_RX_IP_CKSUM_UNKNOWN 0
-#define PKT_RX_IP_CKSUM_BAD     (1ULL << 4)
-#define PKT_RX_IP_CKSUM_GOOD    (1ULL << 7)
-#define PKT_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_IP_CKSUM_BAD     (1ULL << 4)
+#define RTE_MBUF_F_RX_IP_CKSUM_GOOD    (1ULL << 7)
+#define RTE_MBUF_F_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_IP_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define PKT_RX_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define PKT_RX_IP_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_IP_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define PKT_RX_IP_CKSUM_NONE RTE_DEPRECATED(PKT_RX_IP_CKSUM_NONE) \
+		RTE_MBUF_F_RX_IP_CKSUM_NONE
 
 /**
  * Mask of bits used to determine the status of RX L4 checksum.
- * - PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
- * - PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
- * - PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
- * - PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
+ * - RTE_MBUF_F_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
+ * - RTE_MBUF_F_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
  *   data, but the integrity of the L4 data is verified.
  */
-#define PKT_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
-
-#define PKT_RX_L4_CKSUM_UNKNOWN 0
-#define PKT_RX_L4_CKSUM_BAD     (1ULL << 3)
-#define PKT_RX_L4_CKSUM_GOOD    (1ULL << 8)
-#define PKT_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define RTE_MBUF_F_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_L4_CKSUM_BAD     (1ULL << 3)
+#define RTE_MBUF_F_RX_L4_CKSUM_GOOD    (1ULL << 8)
+#define RTE_MBUF_F_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_L4_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+#define PKT_RX_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define PKT_RX_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define PKT_RX_L4_CKSUM_NONE RTE_DEPRECATED(PKT_RX_L4_CKSUM_NONE) \
+		RTE_MBUF_F_RX_L4_CKSUM_NONE
 
 /** RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_PTP  (1ULL << 9)
+#define RTE_MBUF_F_RX_IEEE1588_PTP  (1ULL << 9)
+#define PKT_RX_IEEE1588_PTP RTE_DEPRECATED(PKT_RX_IEEE1588_PTP) \
+		RTE_MBUF_F_RX_IEEE1588_PTP
 
 /** RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_IEEE1588_TMST (1ULL << 10)
+#define RTE_MBUF_F_RX_IEEE1588_TMST (1ULL << 10)
+#define PKT_RX_IEEE1588_TMST RTE_DEPRECATED(PKT_RX_IEEE1588_TMST) \
+		RTE_MBUF_F_RX_IEEE1588_TMST
 
 /** FD id reported if FDIR match. */
-#define PKT_RX_FDIR_ID       (1ULL << 13)
+#define RTE_MBUF_F_RX_FDIR_ID       (1ULL << 13)
+#define PKT_RX_FDIR_ID RTE_DEPRECATED(PKT_RX_FDIR_ID) \
+		RTE_MBUF_F_RX_FDIR_ID
 
 /** Flexible bytes reported if FDIR match. */
-#define PKT_RX_FDIR_FLX      (1ULL << 14)
+#define RTE_MBUF_F_RX_FDIR_FLX      (1ULL << 14)
+#define PKT_RX_FDIR_FLX RTE_DEPRECATED(PKT_RX_FDIR_FLX) \
+		RTE_MBUF_F_RX_FDIR_FLX
 
 /**
  * The outer VLAN has been stripped by the hardware and its TCI is
  * saved in mbuf->vlan_tci_outer.
  * This can only happen if VLAN stripping is enabled in the Rx
  * configuration of the PMD.
- * When PKT_RX_QINQ_STRIPPED is set, the flags PKT_RX_VLAN and PKT_RX_QINQ
- * must also be set.
+ * When RTE_MBUF_F_RX_QINQ_STRIPPED is set, the flags RTE_MBUF_F_RX_VLAN
+ * and RTE_MBUF_F_RX_QINQ must also be set.
  *
- * - If both PKT_RX_QINQ_STRIPPED and PKT_RX_VLAN_STRIPPED are set, the 2 VLANs
- *   have been stripped by the hardware and their TCIs are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
- * - If PKT_RX_QINQ_STRIPPED is set and PKT_RX_VLAN_STRIPPED is unset, only the
- *   outer VLAN is removed from packet data, but both tci are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If both RTE_MBUF_F_RX_QINQ_STRIPPED and RTE_MBUF_F_RX_VLAN_STRIPPED are
+ *   set, the 2 VLANs have been stripped by the hardware and their TCIs are
+ *   saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If RTE_MBUF_F_RX_QINQ_STRIPPED is set and RTE_MBUF_F_RX_VLAN_STRIPPED
+ *   is unset, only the outer VLAN is removed from packet data, but both tci
+ *   are saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
  */
-#define PKT_RX_QINQ_STRIPPED (1ULL << 15)
+#define RTE_MBUF_F_RX_QINQ_STRIPPED (1ULL << 15)
+#define PKT_RX_QINQ_STRIPPED RTE_DEPRECATED(PKT_RX_QINQ_STRIPPED) \
+		RTE_MBUF_F_RX_QINQ_STRIPPED
 
 /**
  * When packets are coalesced by a hardware or virtual driver, this flag
  * can be set in the RX mbuf, meaning that the m->tso_segsz field is
  * valid and is set to the segment size of original packets.
  */
-#define PKT_RX_LRO           (1ULL << 16)
+#define RTE_MBUF_F_RX_LRO           (1ULL << 16)
+#define PKT_RX_LRO RTE_DEPRECATED(PKT_RX_LRO) RTE_MBUF_F_RX_LRO
 
 /* There is no flag defined at offset 17. It is free for any future use. */
 
 /**
  * Indicate that security offload processing was applied on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD	(1ULL << 18)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD	(1ULL << 18)
+#define PKT_RX_SEC_OFFLOAD RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD
 
 /**
  * Indicate that security offload processing failed on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define PKT_RX_SEC_OFFLOAD_FAILED RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD_FAILED) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
 
 /**
  * The RX packet is a double VLAN, and the outer tci has been
- * saved in mbuf->vlan_tci_outer. If this flag is set, PKT_RX_VLAN
+ * saved in mbuf->vlan_tci_outer. If this flag is set, RTE_MBUF_F_RX_VLAN
  * must also be set and the inner tci is saved in mbuf->vlan_tci.
- * If the flag PKT_RX_QINQ_STRIPPED is also present, both VLANs
+ * If the flag RTE_MBUF_F_RX_QINQ_STRIPPED is also present, both VLANs
  * headers have been stripped from mbuf data, else they are still
  * present.
  */
-#define PKT_RX_QINQ          (1ULL << 20)
+#define RTE_MBUF_F_RX_QINQ          (1ULL << 20)
+#define PKT_RX_QINQ RTE_DEPRECATED(PKT_RX_QINQ) RTE_MBUF_F_RX_QINQ
 
 /**
  * Mask of bits used to determine the status of outer RX L4 checksum.
- * - PKT_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4 checksum
- * - PKT_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet is wrong
- * - PKT_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet is valid
- * - PKT_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4
+ *   checksum
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet
+ *   is wrong
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet
+ *   is valid
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
  *
- * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
- * HW capability, At minimum, the PMD should support
- * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
- */
-#define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
-
-#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN	0
-#define PKT_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
-#define PKT_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
-#define PKT_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
-
-/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-
-#define PKT_FIRST_FREE (1ULL << 23)
-#define PKT_LAST_FREE (1ULL << 40)
-
-/* add new TX flags here, don't forget to update PKT_LAST_FREE  */
+ * The detection of RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD shall be based on the
+ * given HW capability. At a minimum, the PMD should support
+ * RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN and RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+ * states if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ */
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN	0
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_UNKNOWN) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+#define PKT_RX_OUTER_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+#define PKT_RX_OUTER_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+#define PKT_RX_OUTER_L4_CKSUM_INVALID \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_INVALID) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+/* add new RX flags here, don't forget to update RTE_MBUF_F_FIRST_FREE */
+
+#define RTE_MBUF_F_FIRST_FREE (1ULL << 23)
+#define PKT_FIRST_FREE RTE_DEPRECATED(PKT_FIRST_FREE) RTE_MBUF_F_FIRST_FREE
+#define RTE_MBUF_F_LAST_FREE (1ULL << 40)
+#define PKT_LAST_FREE RTE_DEPRECATED(PKT_LAST_FREE) RTE_MBUF_F_LAST_FREE
+
+/* add new TX flags here, don't forget to update RTE_MBUF_F_LAST_FREE  */
 
 /**
  * Outer UDP checksum offload flag. This flag is used for enabling
  * outer UDP checksum in PMD. To use outer UDP checksum, the user needs to
  * 1) Enable the following in mbuf,
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
- * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
- * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
+ * b) Set the RTE_MBUF_F_TX_OUTER_UDP_CKSUM flag.
+ * c) Set the RTE_MBUF_F_TX_OUTER_IPV4 or RTE_MBUF_F_TX_OUTER_IPV6 flag.
  * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
-#define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define RTE_MBUF_F_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define PKT_TX_OUTER_UDP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_UDP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM
 
 /**
  * UDP Fragmentation Offload flag. This flag is used for enabling UDP
  * fragmentation in SW or in HW. When use UFO, mbuf->tso_segsz is used
  * to store the MSS of UDP fragments.
  */
-#define PKT_TX_UDP_SEG	(1ULL << 42)
+#define RTE_MBUF_F_TX_UDP_SEG	(1ULL << 42)
+#define PKT_TX_UDP_SEG RTE_DEPRECATED(PKT_TX_UDP_SEG) RTE_MBUF_F_TX_UDP_SEG
 
 /**
  * Request security offload processing on the TX packet.
  * To use Tx security offload, the user needs to fill l2_len in mbuf
  * indicating L2 header size and where L3 header starts.
  */
-#define PKT_TX_SEC_OFFLOAD	(1ULL << 43)
+#define RTE_MBUF_F_TX_SEC_OFFLOAD	(1ULL << 43)
+#define PKT_TX_SEC_OFFLOAD RTE_DEPRECATED(PKT_TX_SEC_OFFLOAD) \
+		RTE_MBUF_F_TX_SEC_OFFLOAD
 
 /**
  * Offload the MACsec. This flag must be set by the application to enable
  * this offload feature for a packet to be transmitted.
  */
-#define PKT_TX_MACSEC        (1ULL << 44)
+#define RTE_MBUF_F_TX_MACSEC        (1ULL << 44)
+#define PKT_TX_MACSEC RTE_DEPRECATED(PKT_TX_MACSEC) RTE_MBUF_F_TX_MACSEC
 
 /**
  * Bits 45:48 used for the tunnel type.
  * The tunnel type must be specified for TSO or checksum on the inner part
  * of tunnel packets.
- * These flags can be used with PKT_TX_TCP_SEG for TSO, or PKT_TX_xxx_CKSUM.
+ * These flags can be used with RTE_MBUF_F_TX_TCP_SEG for TSO, or
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * The mbuf fields for inner and outer header lengths are required:
  * outer_l2_len, outer_l3_len, l2_len, l3_len, l4_len and tso_segsz for TSO.
  */
-#define PKT_TX_TUNNEL_VXLAN   (0x1ULL << 45)
-#define PKT_TX_TUNNEL_GRE     (0x2ULL << 45)
-#define PKT_TX_TUNNEL_IPIP    (0x3ULL << 45)
-#define PKT_TX_TUNNEL_GENEVE  (0x4ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN   (0x1ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GRE     (0x2ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IPIP    (0x3ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GENEVE  (0x4ULL << 45)
 /** TX packet with MPLS-in-UDP RFC 7510 header. */
-#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
-#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
-#define PKT_TX_TUNNEL_GTP       (0x7ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GTP       (0x7ULL << 45)
 /**
  * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
- * or PKT_TX_TUNNEL_IPIP if possible.
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_GRE
+ * or RTE_MBUF_F_TX_TUNNEL_IPIP if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_IP (0xDULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IP (0xDULL << 45)
 /**
  * Generic UDP encapsulated tunnel type, used for TSO and checksum offload.
  * UDP tunnel type implies outer IP layer.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_VXLAN
  * if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_UDP (0xEULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_UDP (0xEULL << 45)
 /* add new TX TUNNEL type here */
-#define PKT_TX_TUNNEL_MASK    (0xFULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MASK    (0xFULL << 45)
+
+#define PKT_TX_TUNNEL_VXLAN RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define PKT_TX_TUNNEL_GRE RTE_DEPRECATED(PKT_TX_TUNNEL_GRE) \
+		RTE_MBUF_F_TX_TUNNEL_GRE
+#define PKT_TX_TUNNEL_IPIP RTE_DEPRECATED(PKT_TX_TUNNEL_IPIP) \
+		RTE_MBUF_F_TX_TUNNEL_IPIP
+#define PKT_TX_TUNNEL_GENEVE RTE_DEPRECATED(PKT_TX_TUNNEL_GENEVE) \
+		RTE_MBUF_F_TX_TUNNEL_GENEVE
+#define PKT_TX_TUNNEL_MPLSINUDP RTE_DEPRECATED(PKT_TX_TUNNEL_MPLSINUDP) \
+		RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+#define PKT_TX_TUNNEL_VXLAN_GPE RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN_GPE) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+#define PKT_TX_TUNNEL_GTP RTE_DEPRECATED(PKT_TX_TUNNEL_GTP) \
+		RTE_MBUF_F_TX_TUNNEL_GTP
+#define PKT_TX_TUNNEL_IP RTE_DEPRECATED(PKT_TX_TUNNEL_IP) \
+		RTE_MBUF_F_TX_TUNNEL_IP
+#define PKT_TX_TUNNEL_UDP RTE_DEPRECATED(PKT_TX_TUNNEL_UDP) \
+		RTE_MBUF_F_TX_TUNNEL_UDP
+#define PKT_TX_TUNNEL_MASK RTE_DEPRECATED(PKT_TX_TUNNEL_MASK) \
+		RTE_MBUF_F_TX_TUNNEL_MASK
 
 /**
  * Double VLAN insertion (QinQ) request to driver, driver may offload the
  * insertion based on device capability.
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
-#define PKT_TX_QINQ        (1ULL << 49)
+#define RTE_MBUF_F_TX_QINQ        (1ULL << 49)
+#define PKT_TX_QINQ RTE_DEPRECATED(PKT_TX_QINQ) RTE_MBUF_F_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
  * packet to be transmitted on hardware supporting TSO:
- *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
- *    PKT_TX_TCP_CKSUM)
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag
+ *  - set the RTE_MBUF_F_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ *    RTE_MBUF_F_TX_TCP_CKSUM)
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
+ *  - if it's IPv4, set the RTE_MBUF_F_TX_IP_CKSUM flag
  *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
  */
-#define PKT_TX_TCP_SEG       (1ULL << 50)
+#define RTE_MBUF_F_TX_TCP_SEG       (1ULL << 50)
+#define PKT_TX_TCP_SEG RTE_DEPRECATED(PKT_TX_TCP_SEG) RTE_MBUF_F_TX_TCP_SEG
 
 /** TX IEEE1588 packet to timestamp. */
-#define PKT_TX_IEEE1588_TMST (1ULL << 51)
+#define RTE_MBUF_F_TX_IEEE1588_TMST (1ULL << 51)
+#define PKT_TX_IEEE1588_TMST RTE_DEPRECATED(PKT_TX_IEEE1588_TMST) \
+		RTE_MBUF_F_TX_IEEE1588_TMST
 
-/**
+/*
  * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
  * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
  * L4 checksum offload, the user needs to:
  *  - fill l2_len and l3_len in mbuf
- *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - set the flags RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_SCTP_CKSUM or
+ *    RTE_MBUF_F_TX_UDP_CKSUM
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
  */
-#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
+
+/** Disable L4 cksum of TX pkt. */
+#define RTE_MBUF_F_TX_L4_NO_CKSUM   (0ULL << 52)
 
 /** TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_TCP_CKSUM     (1ULL << 52)
+#define RTE_MBUF_F_TX_TCP_CKSUM     (1ULL << 52)
 
 /** SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM    (2ULL << 52)
+#define RTE_MBUF_F_TX_SCTP_CKSUM    (2ULL << 52)
 
 /** UDP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM     (3ULL << 52)
+#define RTE_MBUF_F_TX_UDP_CKSUM     (3ULL << 52)
 
 /** Mask for L4 cksum offload request. */
-#define PKT_TX_L4_MASK       (3ULL << 52)
+#define RTE_MBUF_F_TX_L4_MASK       (3ULL << 52)
+
+#define PKT_TX_L4_NO_CKSUM RTE_DEPRECATED(PKT_TX_L4_NO_CKSUM) \
+		RTE_MBUF_F_TX_L4_NO_CKSUM
+#define PKT_TX_TCP_CKSUM RTE_DEPRECATED(PKT_TX_TCP_CKSUM) \
+		RTE_MBUF_F_TX_TCP_CKSUM
+#define PKT_TX_SCTP_CKSUM RTE_DEPRECATED(PKT_TX_SCTP_CKSUM) \
+		RTE_MBUF_F_TX_SCTP_CKSUM
+#define PKT_TX_UDP_CKSUM RTE_DEPRECATED(PKT_TX_UDP_CKSUM) \
+		RTE_MBUF_F_TX_UDP_CKSUM
+#define PKT_TX_L4_MASK RTE_DEPRECATED(PKT_TX_L4_MASK) RTE_MBUF_F_TX_L4_MASK
 
 /**
- * Offload the IP checksum in the hardware. The flag PKT_TX_IPV4 should
+ * Offload the IP checksum in the hardware. The flag RTE_MBUF_F_TX_IPV4 should
  * also be set by the application, although a PMD will only check
- * PKT_TX_IP_CKSUM.
+ * RTE_MBUF_F_TX_IP_CKSUM.
  *  - fill the mbuf offload information: l2_len, l3_len
  */
-#define PKT_TX_IP_CKSUM      (1ULL << 54)
+#define RTE_MBUF_F_TX_IP_CKSUM      (1ULL << 54)
+#define PKT_TX_IP_CKSUM RTE_DEPRECATED(PKT_TX_IP_CKSUM) RTE_MBUF_F_TX_IP_CKSUM
 
 /**
  * Packet is IPv4. This flag must be set when using any offload feature
@@ -315,7 +421,8 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV4          (1ULL << 55)
+#define RTE_MBUF_F_TX_IPV4          (1ULL << 55)
+#define PKT_TX_IPV4 RTE_DEPRECATED(PKT_TX_IPV4) RTE_MBUF_F_TX_IPV4
 
 /**
  * Packet is IPv6. This flag must be set when using an offload feature
@@ -323,65 +430,76 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV6          (1ULL << 56)
+#define RTE_MBUF_F_TX_IPV6          (1ULL << 56)
+#define PKT_TX_IPV6 RTE_DEPRECATED(PKT_TX_IPV6) RTE_MBUF_F_TX_IPV6
 
 /**
  * VLAN tag insertion request to driver, driver may offload the insertion
  * based on the device capability.
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
-#define PKT_TX_VLAN          (1ULL << 57)
+#define RTE_MBUF_F_TX_VLAN          (1ULL << 57)
+#define PKT_TX_VLAN RTE_DEPRECATED(PKT_TX_VLAN) RTE_MBUF_F_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
- * flag PKT_TX_OUTER_IPV4 should also be set by the application, although
- * a PMD will only check PKT_TX_OUTER_IP_CKSUM.
+ * flag RTE_MBUF_F_TX_OUTER_IPV4 should also be set by the application, although
+ * a PMD will only check RTE_MBUF_F_TX_OUTER_IP_CKSUM.
  *  - fill the mbuf offload information: outer_l2_len, outer_l3_len
  */
-#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define RTE_MBUF_F_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define PKT_TX_OUTER_IP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_IP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM
 
 /**
  * Packet outer header is IPv4. This flag must be set when using any
  * outer offload feature (L3 or L4 checksum) to tell the NIC that the
  * outer header of the tunneled packet is an IPv4 packet.
  */
-#define PKT_TX_OUTER_IPV4   (1ULL << 59)
+#define RTE_MBUF_F_TX_OUTER_IPV4   (1ULL << 59)
+#define PKT_TX_OUTER_IPV4 RTE_DEPRECATED(PKT_TX_OUTER_IPV4) \
+		RTE_MBUF_F_TX_OUTER_IPV4
 
 /**
  * Packet outer header is IPv6. This flag must be set when using any
  * outer offload feature (L4 checksum) to tell the NIC that the outer
  * header of the tunneled packet is an IPv6 packet.
  */
-#define PKT_TX_OUTER_IPV6    (1ULL << 60)
+#define RTE_MBUF_F_TX_OUTER_IPV6    (1ULL << 60)
+#define PKT_TX_OUTER_IPV6 RTE_DEPRECATED(PKT_TX_OUTER_IPV6) \
+		RTE_MBUF_F_TX_OUTER_IPV6
 
 /**
  * Bitmask of all supported packet Tx offload features flags,
  * which can be set for packet.
  */
-#define PKT_TX_OFFLOAD_MASK (    \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_IEEE1588_TMST |	 \
-		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ |        \
-		PKT_TX_TUNNEL_MASK |	 \
-		PKT_TX_MACSEC |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
-		PKT_TX_UDP_SEG |	 \
-		PKT_TX_OUTER_UDP_CKSUM)
+#define RTE_MBUF_F_TX_OFFLOAD_MASK (    \
+		RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |  \
+		RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_IEEE1588_TMST |	 \
+		RTE_MBUF_F_TX_TCP_SEG |         \
+		RTE_MBUF_F_TX_QINQ |        \
+		RTE_MBUF_F_TX_TUNNEL_MASK |	 \
+		RTE_MBUF_F_TX_MACSEC |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
+		RTE_MBUF_F_TX_UDP_SEG |	 \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM)
+#define PKT_TX_OFFLOAD_MASK RTE_DEPRECATED(PKT_TX_OFFLOAD_MASK) RTE_MBUF_F_TX_OFFLOAD_MASK
 
 /**
  * Mbuf having an external buffer attached. shinfo in mbuf must be filled.
  */
-#define EXT_ATTACHED_MBUF    (1ULL << 61)
+#define RTE_MBUF_F_EXTERNAL    (1ULL << 61)
+#define EXT_ATTACHED_MBUF RTE_DEPRECATED(EXT_ATTACHED_MBUF) RTE_MBUF_F_EXTERNAL
 
-#define IND_ATTACHED_MBUF    (1ULL << 62) /**< Indirect attached mbuf */
+#define RTE_MBUF_F_INDIRECT    (1ULL << 62) /**< Indirect attached mbuf */
+#define IND_ATTACHED_MBUF RTE_DEPRECATED(IND_ATTACHED_MBUF) RTE_MBUF_F_INDIRECT
 
 /** Alignment constraint of mbuf private area. */
 #define RTE_MBUF_PRIV_ALIGN 8
@@ -528,7 +646,7 @@ struct rte_mbuf {
 
 	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if PKT_RX_VLAN is set. */
+	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
 	uint16_t vlan_tci;
 
 	RTE_STD_C11
@@ -546,7 +664,7 @@ struct rte_mbuf {
 				};
 				uint32_t hi;
 				/**< First 4 flexible bytes or FD ID, dependent
-				 * on PKT_RX_FDIR_* flag in ol_flags.
+				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct rte_mbuf_sched sched;
@@ -565,7 +683,7 @@ struct rte_mbuf {
 		} hash;                   /**< hash information */
 	};
 
-	/** Outer VLAN TCI (CPU order), valid if PKT_RX_QINQ is set. */
+	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
 	uint16_t vlan_tci_outer;
 
 	uint16_t buf_len;         /**< Length of segment buffer. */
@@ -655,14 +773,14 @@ struct rte_mbuf_ext_shared_info {
  * If a mbuf has its data in another mbuf and references it by mbuf
  * indirection, this mbuf can be defined as a cloned mbuf.
  */
-#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & IND_ATTACHED_MBUF)
+#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & RTE_MBUF_F_INDIRECT)
 
 /**
  * Returns TRUE if given mbuf has an external buffer, or FALSE otherwise.
  *
  * External buffer is a user-provided anonymous buffer.
  */
-#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & EXT_ATTACHED_MBUF)
+#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & RTE_MBUF_F_EXTERNAL)
 
 /**
  * Returns TRUE if given mbuf is direct, or FALSE otherwise.
@@ -671,7 +789,7 @@ struct rte_mbuf_ext_shared_info {
  * can be defined as a direct mbuf.
  */
 #define RTE_MBUF_DIRECT(mb) \
-	(!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
+	(!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
 
 /** Uninitialized or unspecified port. */
 #define RTE_MBUF_PORT_INVALID UINT16_MAX
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index ca46eb279e..0f28d51651 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -130,7 +130,7 @@ init_shared_mem(void)
 		mark_free(dynfield1);
 
 		/* init free_flags */
-		for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+		for (mask = RTE_MBUF_F_FIRST_FREE; mask <= RTE_MBUF_F_LAST_FREE; mask <<= 1)
 			shm->free_flags |= mask;
 
 		process_score();
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index d0eeb6f996..43d782d986 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -350,7 +350,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
 		return -1;
 
 	vh = (struct rte_vlan_hdr *)(eh + 1);
-	m->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	m->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
 
 	/* Copy ether header over rather than moving whole packet */
@@ -397,9 +397,9 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
 	vh = (struct rte_vlan_hdr *) (nh + 1);
 	vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
 
-	(*m)->ol_flags &= ~(PKT_RX_VLAN_STRIPPED | PKT_TX_VLAN);
+	(*m)->ol_flags &= ~(RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_TX_VLAN);
 
-	if ((*m)->ol_flags & PKT_TX_TUNNEL_MASK)
+	if ((*m)->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		(*m)->outer_l2_len += sizeof(struct rte_vlan_hdr);
 	else
 		(*m)->l2_len += sizeof(struct rte_vlan_hdr);
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 05948b69b7..38e5c9ae8a 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -333,7 +333,7 @@ rte_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		l3_len = rte_be_to_cpu_16(ipv4_hdr->total_length);
@@ -474,7 +474,7 @@ rte_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (uint32_t)(ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len = ipv6_hdr->payload_len;
diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
index 42639bc154..917b7748bc 100644
--- a/lib/net/rte_net.h
+++ b/lib/net/rte_net.h
@@ -125,17 +125,17 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 	 * Mainly it is required to avoid fragmented headers check if
 	 * no offloads are requested.
 	 */
-	if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
-			  PKT_TX_OUTER_IP_CKSUM)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG |
+			  RTE_MBUF_F_TX_OUTER_IP_CKSUM)))
 		return 0;
 
-	if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
+	if (ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)) {
 		inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
 		/*
 		 * prepare outer IPv4 header checksum by setting it to 0,
 		 * in order to be computed by hardware NICs.
 		 */
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 					struct rte_ipv4_hdr *, m->outer_l2_len);
 			ipv4_hdr->hdr_checksum = 0;
@@ -151,16 +151,16 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 		     inner_l3_offset + m->l3_len + m->l4_len))
 		return -ENOTSUP;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 				inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) {
-		if (ol_flags & PKT_TX_IPV4) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
 			udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
@@ -175,9 +175,9 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 			udp_hdr->dgram_cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
 					ol_flags);
 		}
-	} else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM ||
-			(ol_flags & PKT_TX_TCP_SEG)) {
-		if (ol_flags & PKT_TX_IPV4) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM ||
+			(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			/* non-TSO tcp or TSO */
 			tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..a4814b8198 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2085,7 +2085,7 @@ pkt_work_tag(struct rte_mbuf *mbuf,
 	struct tag_data *data)
 {
 	mbuf->hash.fdir.hi = data->tag;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static __rte_always_inline void
@@ -2103,10 +2103,10 @@ pkt4_work_tag(struct rte_mbuf *mbuf0,
 	mbuf2->hash.fdir.hi = data2->tag;
 	mbuf3->hash.fdir.hi = data3->tag;
 
-	mbuf0->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf1->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf2->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf3->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf0->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf1->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf2->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf3->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 /**
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index ec2c91e7a7..56bf98d9b1 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -411,25 +411,25 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 static __rte_always_inline void
 virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 {
-	uint64_t csum_l4 = m_buf->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = m_buf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	if (csum_l4) {
 		net_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		net_hdr->csum_start = m_buf->l2_len + m_buf->l3_len;
 
 		switch (csum_l4) {
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_tcp_hdr,
 						cksum));
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_udp_hdr,
 						dgram_cksum));
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_sctp_hdr,
 						cksum));
 			break;
@@ -441,7 +441,7 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 
 	/* IP cksum verification cannot be bypassed, then calculate here */
-	if (m_buf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr;
 
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m_buf, struct rte_ipv4_hdr *,
@@ -450,15 +450,15 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	}
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG) {
-		if (m_buf->ol_flags & PKT_TX_IPV4)
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		if (m_buf->ol_flags & RTE_MBUF_F_TX_IPV4)
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len
 					+ m_buf->l4_len;
-	} else if (m_buf->ol_flags & PKT_TX_UDP_SEG) {
+	} else if (m_buf->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
 		net_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len +
@@ -2259,7 +2259,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		m->l3_len = rte_ipv4_hdr_len(ipv4_hdr);
 		if (data_len < m->l2_len + m->l3_len)
 			goto error;
-		m->ol_flags |= PKT_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
 		*l4_proto = ipv4_hdr->next_proto_id;
 		break;
 	case RTE_ETHER_TYPE_IPV6:
@@ -2268,7 +2268,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 				m->l2_len);
 		m->l3_len = sizeof(struct rte_ipv6_hdr);
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		*l4_proto = ipv6_hdr->proto;
 		break;
 	default:
@@ -2323,17 +2323,17 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			case (offsetof(struct rte_tcp_hdr, cksum)):
 				if (l4_proto != IPPROTO_TCP)
 					goto error;
-				m->ol_flags |= PKT_TX_TCP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 				break;
 			case (offsetof(struct rte_udp_hdr, dgram_cksum)):
 				if (l4_proto != IPPROTO_UDP)
 					goto error;
-				m->ol_flags |= PKT_TX_UDP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 				break;
 			case (offsetof(struct rte_sctp_hdr, cksum)):
 				if (l4_proto != IPPROTO_SCTP)
 					goto error;
-				m->ol_flags |= PKT_TX_SCTP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 				break;
 			default:
 				goto error;
@@ -2355,14 +2355,14 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			tcp_len = (tcp_hdr->data_off & 0xf0) >> 2;
 			if (data_len < m->l2_len + m->l3_len + tcp_len)
 				goto error;
-			m->ol_flags |= PKT_TX_TCP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = tcp_len;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if (l4_proto != IPPROTO_UDP)
 				goto error;
-			m->ol_flags |= PKT_TX_UDP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = sizeof(struct rte_udp_hdr);
 			break;
@@ -2396,7 +2396,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		return;
 	}
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -2423,7 +2423,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported != 0) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -2453,13 +2453,13 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		case VIRTIO_NET_HDR_GSO_TCPV6:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_TCP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		default:
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  2021-09-29 11:39  0%       ` Ananyev, Konstantin
@ 2021-09-30  5:05  0%         ` Anoob Joseph
  2021-09-30  9:09  0%           ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Anoob Joseph @ 2021-09-30  5:05 UTC (permalink / raw)
  To: Ananyev, Konstantin, Archana Muniganti, Akhil Goyal, Nicolau,
	Radu, Zhang, Roy Fan, hemant.agrawal
  Cc: Tejasree Kondoj, Ankur Dwivedi, Jerin Jacob Kollanukkaran, dev

Hi Konstantin,

Please see inline.

Thanks,
Anoob

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Wednesday, September 29, 2021 5:09 PM
> To: Anoob Joseph <anoobj@marvell.com>; Archana Muniganti
> <marchana@marvell.com>; Akhil Goyal <gakhil@marvell.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> hemant.agrawal@nxp.com
> Cc: Tejasree Kondoj <ktejasree@marvell.com>; Ankur Dwivedi
> <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> dev@dpdk.org
> Subject: [EXT] RE: [PATCH v2 1/3] security: add SA config option for inner pkt
> csum
> 
> External Email
> 
> ----------------------------------------------------------------------
> Hi Anoob,
> 
> > Hi Konstanin,
> >
> > Please see inline.
> >
> > Thanks,
> > Anoob
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Sent: Wednesday, September 29, 2021 4:26 PM
> > > To: Archana Muniganti <marchana@marvell.com>; Akhil Goyal
> > > <gakhil@marvell.com>; Nicolau, Radu <radu.nicolau@intel.com>; Zhang,
> > > Roy Fan <roy.fan.zhang@intel.com>; hemant.agrawal@nxp.com
> > > Cc: Anoob Joseph <anoobj@marvell.com>; Tejasree Kondoj
> > > <ktejasree@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Jerin
> > > Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org
> > > Subject: [EXT] RE: [PATCH v2 1/3] security: add SA config option for
> > > inner pkt csum
> > >
> > > External Email
> > >
> > > --------------------------------------------------------------------
> > > --
> > > > Add inner packet IPv4 hdr and L4 checksum enable options in conf.
> > > > These will be used in case of protocol offload.
> > > > Per SA, application could specify whether the
> > > > checksum(compute/verify) can be offloaded to security device.
> > > >
> > > > Signed-off-by: Archana Muniganti <marchana@marvell.com>
> > > > ---
> > > >  doc/guides/cryptodevs/features/default.ini |  1 +
> > > >  doc/guides/rel_notes/deprecation.rst       |  4 ++--
> > > >  doc/guides/rel_notes/release_21_11.rst     |  4 ++++
> > > >  lib/cryptodev/rte_cryptodev.h              |  2 ++
> > > >  lib/security/rte_security.h                | 18 ++++++++++++++++++
> > > >  5 files changed, 27 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/doc/guides/cryptodevs/features/default.ini
> > > > b/doc/guides/cryptodevs/features/default.ini
> > > > index c24814de98..96d95ddc81 100644
> > > > --- a/doc/guides/cryptodevs/features/default.ini
> > > > +++ b/doc/guides/cryptodevs/features/default.ini
> > > > @@ -33,6 +33,7 @@ Non-Byte aligned data  =  Sym raw data path API
> > > > = Cipher multiple data units =
> > > >  Cipher wrapped key     =
> > > > +Inner checksum         =
> > > >
> > > >  ;
> > > >  ; Supported crypto algorithms of a default crypto driver.
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 05fc2fdee7..8308e00ed4 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -232,8 +232,8 @@ Deprecation Notices
> > > >    IPsec payload MSS (Maximum Segment Size), and ESN (Extended
> > > > Sequence
> > > Number).
> > > >
> > > >  * security: The IPsec SA config options ``struct
> > > > rte_security_ipsec_sa_options``
> > > > -  will be updated with new fields to support new features like
> > > > IPsec inner
> > > > -  checksum, TSO in case of protocol offload.
> > > > +  will be updated with new fields to support new features like
> > > > + TSO in case of  protocol offload.
> > > >
> > > >  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new
> field
> > > >    ``hdr_l3_len`` to configure tunnel L3 header length.
> > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > index 8da851cccc..93d1b36889 100644
> > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > @@ -194,6 +194,10 @@ ABI Changes
> > > >    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> > > >    and hard expiry limits. Limits can be either in number of packets or bytes.
> > > >
> > > > +* security: The new options ``ip_csum_enable`` and
> > > > +``l4_csum_enable`` were added
> > > > +  in structure ``rte_security_ipsec_sa_options`` to indicate
> > > > +whether inner
> > > > +  packet IPv4 header checksum and L4 checksum need to be
> > > > +offloaded to
> > > > +  security device.
> > > >
> > > >  Known Issues
> > > >  ------------
> > > > diff --git a/lib/cryptodev/rte_cryptodev.h
> > > > b/lib/cryptodev/rte_cryptodev.h index bb01f0f195..d9271a6c45
> > > > 100644
> > > > --- a/lib/cryptodev/rte_cryptodev.h
> > > > +++ b/lib/cryptodev/rte_cryptodev.h
> > > > @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> > > > rte_crypto_asym_xform_type *xform_enum,  /**< Support operations
> > > > on
> > > multiple data-units message */
> > > >  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL
> << 26)
> > > >  /**< Support wrapped key in cipher xform  */
> > > > +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL
> > > << 27)
> > > > +/**< Support inner checksum computation/verification */
> > > >
> > > >  /**
> > > >   * Get the name of a crypto device feature flag diff --git
> > > > a/lib/security/rte_security.h b/lib/security/rte_security.h index
> > > > ab1a6e1f65..945f45ad76 100644
> > > > --- a/lib/security/rte_security.h
> > > > +++ b/lib/security/rte_security.h
> > > > @@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
> > > >  	 * * 0: Do not match UDP ports
> > > >  	 */
> > > >  	uint32_t udp_ports_verify : 1;
> > > > +
> > > > +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> > > > +	 *
> > > > +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> > > > +	 *      before tunnel encapsulation and for inbound, verify after
> > > > +	 *      tunnel decapsulation.
> > > > +	 * * 0: Inner packet IP header checksum is not computed/verified.
> > > > +	 */
> > > > +	uint32_t ip_csum_enable : 1;
> > > > +
> > > > +	/** Compute/verify inner packet L4 checksum in tunnel mode
> > > > +	 *
> > > > +	 * * 1: For outbound, compute inner packet L4 checksum before
> > > > +	 *      tunnel encapsulation and for inbound, verify after
> > > > +	 *      tunnel decapsulation.
> > > > +	 * * 0: Inner packet L4 checksum is not computed/verified.
> > > > +	 */
> > > > +	uint32_t l4_csum_enable : 1;
> > >
> > > As I understand these 2 new flags serve two purposes:
> > > 1. report HW/PMD ability to perform these offloads.
> > > 2. allow user to enable/disable this offload on SA basis.
> >
> > [Anoob] Correct
> >
> > >
> > > One question I have - how it will work on data-path?
> > > Would decision to perform these offloads be based on mbuf->ol_flags
> > > value (same as we doing for ethdev TX offloads)?
> > > Or some other approach is implied?
> >
> > [Anoob] There will be two settings. It can enabled per SA or enabled per
> packet.
> 
> Ok, will it be documented somewhere?
> Or probably it already is, and I just missed/forgot it somehow?

[Anoob] Looks like we missed documenting this. Will update in the next version. Should we add documentation around SA options or around TX offload flags? I think it's better around SA options. Do you suggest otherwise?

 
> 
> > >
> > > >  };
> > > >
> > > >  /** IPSec security association direction */
> > > > --
> > > > 2.22.0


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/3] security: add SA config option for inner pkt csum
  2021-09-30  5:05  0%         ` Anoob Joseph
@ 2021-09-30  9:09  0%           ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-09-30  9:09 UTC (permalink / raw)
  To: Anoob Joseph, Archana Muniganti, Akhil Goyal, Nicolau, Radu,
	Zhang, Roy Fan, hemant.agrawal
  Cc: Tejasree Kondoj, Ankur Dwivedi, Jerin Jacob Kollanukkaran, dev



Hi Anoob,

> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > Hi Anoob,
> >
> > > Hi Konstanin,
> > >
> > > Please see inline.
> > >
> > > Thanks,
> > > Anoob
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Sent: Wednesday, September 29, 2021 4:26 PM
> > > > To: Archana Muniganti <marchana@marvell.com>; Akhil Goyal
> > > > <gakhil@marvell.com>; Nicolau, Radu <radu.nicolau@intel.com>; Zhang,
> > > > Roy Fan <roy.fan.zhang@intel.com>; hemant.agrawal@nxp.com
> > > > Cc: Anoob Joseph <anoobj@marvell.com>; Tejasree Kondoj
> > > > <ktejasree@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Jerin
> > > > Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org
> > > > Subject: [EXT] RE: [PATCH v2 1/3] security: add SA config option for
> > > > inner pkt csum
> > > >
> > > > External Email
> > > >
> > > > --------------------------------------------------------------------
> > > > --
> > > > > Add inner packet IPv4 hdr and L4 checksum enable options in conf.
> > > > > These will be used in case of protocol offload.
> > > > > Per SA, application could specify whether the
> > > > > checksum(compute/verify) can be offloaded to security device.
> > > > >
> > > > > Signed-off-by: Archana Muniganti <marchana@marvell.com>
> > > > > ---
> > > > >  doc/guides/cryptodevs/features/default.ini |  1 +
> > > > >  doc/guides/rel_notes/deprecation.rst       |  4 ++--
> > > > >  doc/guides/rel_notes/release_21_11.rst     |  4 ++++
> > > > >  lib/cryptodev/rte_cryptodev.h              |  2 ++
> > > > >  lib/security/rte_security.h                | 18 ++++++++++++++++++
> > > > >  5 files changed, 27 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/doc/guides/cryptodevs/features/default.ini
> > > > > b/doc/guides/cryptodevs/features/default.ini
> > > > > index c24814de98..96d95ddc81 100644
> > > > > --- a/doc/guides/cryptodevs/features/default.ini
> > > > > +++ b/doc/guides/cryptodevs/features/default.ini
> > > > > @@ -33,6 +33,7 @@ Non-Byte aligned data  =  Sym raw data path API
> > > > > = Cipher multiple data units =
> > > > >  Cipher wrapped key     =
> > > > > +Inner checksum         =
> > > > >
> > > > >  ;
> > > > >  ; Supported crypto algorithms of a default crypto driver.
> > > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > > b/doc/guides/rel_notes/deprecation.rst
> > > > > index 05fc2fdee7..8308e00ed4 100644
> > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > > @@ -232,8 +232,8 @@ Deprecation Notices
> > > > >    IPsec payload MSS (Maximum Segment Size), and ESN (Extended
> > > > > Sequence
> > > > Number).
> > > > >
> > > > >  * security: The IPsec SA config options ``struct
> > > > > rte_security_ipsec_sa_options``
> > > > > -  will be updated with new fields to support new features like
> > > > > IPsec inner
> > > > > -  checksum, TSO in case of protocol offload.
> > > > > +  will be updated with new fields to support new features like
> > > > > + TSO in case of  protocol offload.
> > > > >
> > > > >  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new
> > field
> > > > >    ``hdr_l3_len`` to configure tunnel L3 header length.
> > > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > > index 8da851cccc..93d1b36889 100644
> > > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > > @@ -194,6 +194,10 @@ ABI Changes
> > > > >    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
> > > > >    and hard expiry limits. Limits can be either in number of packets or bytes.
> > > > >
> > > > > +* security: The new options ``ip_csum_enable`` and
> > > > > +``l4_csum_enable`` were added
> > > > > +  in structure ``rte_security_ipsec_sa_options`` to indicate
> > > > > +whether inner
> > > > > +  packet IPv4 header checksum and L4 checksum need to be
> > > > > +offloaded to
> > > > > +  security device.
> > > > >
> > > > >  Known Issues
> > > > >  ------------
> > > > > diff --git a/lib/cryptodev/rte_cryptodev.h
> > > > > b/lib/cryptodev/rte_cryptodev.h index bb01f0f195..d9271a6c45
> > > > > 100644
> > > > > --- a/lib/cryptodev/rte_cryptodev.h
> > > > > +++ b/lib/cryptodev/rte_cryptodev.h
> > > > > @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum
> > > > > rte_crypto_asym_xform_type *xform_enum,  /**< Support operations
> > > > > on
> > > > multiple data-units message */
> > > > >  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL
> > << 26)
> > > > >  /**< Support wrapped key in cipher xform  */
> > > > > +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL
> > > > << 27)
> > > > > +/**< Support inner checksum computation/verification */
> > > > >
> > > > >  /**
> > > > >   * Get the name of a crypto device feature flag diff --git
> > > > > a/lib/security/rte_security.h b/lib/security/rte_security.h index
> > > > > ab1a6e1f65..945f45ad76 100644
> > > > > --- a/lib/security/rte_security.h
> > > > > +++ b/lib/security/rte_security.h
> > > > > @@ -230,6 +230,24 @@ struct rte_security_ipsec_sa_options {
> > > > >  	 * * 0: Do not match UDP ports
> > > > >  	 */
> > > > >  	uint32_t udp_ports_verify : 1;
> > > > > +
> > > > > +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> > > > > +	 *
> > > > > +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> > > > > +	 *      before tunnel encapsulation and for inbound, verify after
> > > > > +	 *      tunnel decapsulation.
> > > > > +	 * * 0: Inner packet IP header checksum is not computed/verified.
> > > > > +	 */
> > > > > +	uint32_t ip_csum_enable : 1;
> > > > > +
> > > > > +	/** Compute/verify inner packet L4 checksum in tunnel mode
> > > > > +	 *
> > > > > +	 * * 1: For outbound, compute inner packet L4 checksum before
> > > > > +	 *      tunnel encapsulation and for inbound, verify after
> > > > > +	 *      tunnel decapsulation.
> > > > > +	 * * 0: Inner packet L4 checksum is not computed/verified.
> > > > > +	 */
> > > > > +	uint32_t l4_csum_enable : 1;
> > > >
> > > > As I understand these 2 new flags serve two purposes:
> > > > 1. report HW/PMD ability to perform these offloads.
> > > > 2. allow user to enable/disable this offload on SA basis.
> > >
> > > [Anoob] Correct
> > >
> > > >
> > > > One question I have - how it will work on data-path?
> > > > Would decision to perform these offloads be based on mbuf->ol_flags
> > > > value (same as we doing for ethdev TX offloads)?
> > > > Or some other approach is implied?
> > >
> > > [Anoob] There will be two settings. It can enabled per SA or enabled per
> > packet.
> >
> > Ok, will it be documented somewhere?
> > Or probably it already is, and I just missed/forgot it somehow?
> 
> [Anoob] Looks like we missed documenting this. Will update in the next version. Should we add documentation around SA options or around
> TX offload flags? I think it's better around SA options. 

Same thought here.
Thanks


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
  2021-09-29 11:13  0%   ` Singh, Aman Deep
@ 2021-09-30 12:03  0%     ` Andrew Rybchenko
  2021-09-30 12:51  0%       ` Singh, Aman Deep
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-09-30 12:03 UTC (permalink / raw)
  To: Singh, Aman Deep, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov

On 9/29/21 2:13 PM, Singh, Aman Deep wrote:
> 
> On 9/13/2021 4:56 PM, Andrew Rybchenko wrote:
>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>
>> Getting a list of representors from a representor does not make sense.
>> Instead, a parent device should be used.
>>
>> To this end, extend the rte_eth_dev_data structure to include the port ID
>> of the backing device for representors.
>>
>> Signed-off-by: Viacheslav Galaktionov
>> <viacheslav.galaktionov@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>> Acked-by: Beilei Xing <beilei.xing@intel.com>
>> ---
>> The new field is added into the hole in rte_eth_dev_data structure.
>> The patch does not change ABI, but extra care is required since ABI
>> check is disabled for the structure because of the libabigail bug [1].
>> It should not be a problem anyway since 21.11 is a ABI breaking release.
>>
>> Potentially it is bad for out-of-tree drivers which implement
>> representors but do not fill in a new parert_port_id field in
>> rte_eth_dev_data structure. Get ID by name will not work.
> Did we change name of new field from parert_port_id to backer_port_id.

Yes, see v5 changelog below.
It is done to address review notes from Ferruh on v4.

>>
>> mlx5 changes should be reviwed by maintainers very carefully, since
>> we are not sure if we patch it correctly.
>>
>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>
>> v5:
>>      - try to improve name: backer_port_id instead of parent_port_id
>>      - init new field to RTE_MAX_ETHPORTS on allocation to avoid
>>        zero port usage by default
>>
>> v4:
>>      - apply mlx5 review notes: remove fallback from generic ethdev
>>        code and add fallback to mlx5 code to handle legacy usecase
>>
>> v3:
>>      - fix mlx5 build breakage
>>
>> v2:
>>      - fix mlx5 review notes
>>      - try device port ID first before parent in order to address
>>        backward compatibility issue

[snip]


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
  2021-09-30 12:03  0%     ` Andrew Rybchenko
@ 2021-09-30 12:51  0%       ` Singh, Aman Deep
  2021-09-30 13:40  0%         ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Singh, Aman Deep @ 2021-09-30 12:51 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov


On 9/30/2021 5:33 PM, Andrew Rybchenko wrote:
> On 9/29/21 2:13 PM, Singh, Aman Deep wrote:
>> On 9/13/2021 4:56 PM, Andrew Rybchenko wrote:
>>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>>
>>> Getting a list of representors from a representor does not make sense.
>>> Instead, a parent device should be used.
>>>
>>> To this end, extend the rte_eth_dev_data structure to include the port ID
>>> of the backing device for representors.
>>>
>>> Signed-off-by: Viacheslav Galaktionov
>>> <viacheslav.galaktionov@oktetlabs.ru>
>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>>> Acked-by: Beilei Xing <beilei.xing@intel.com>
>>> ---
>>> The new field is added into the hole in rte_eth_dev_data structure.
>>> The patch does not change ABI, but extra care is required since ABI
>>> check is disabled for the structure because of the libabigail bug [1].
>>> It should not be a problem anyway since 21.11 is a ABI breaking release.
>>>
>>> Potentially it is bad for out-of-tree drivers which implement
>>> representors but do not fill in a new parert_port_id field in
>>> rte_eth_dev_data structure. Get ID by name will not work.
>> Did we change name of new field from parert_port_id to backer_port_id.
> Yes, see v5 changelog below.
> It is done to address review notes from Ferruh on v4.

Maybe I did not put it clearly, my bad. I just wanted to point out
that in the above lines the usage
of "parert_port_id" should also be changed.

>
>>> mlx5 changes should be reviwed by maintainers very carefully, since
>>> we are not sure if we patch it correctly.
>>>
>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>>
>>> v5:
>>>       - try to improve name: backer_port_id instead of parent_port_id
>>>       - init new field to RTE_MAX_ETHPORTS on allocation to avoid
>>>         zero port usage by default
>>>
>>> v4:
>>>       - apply mlx5 review notes: remove fallback from generic ethdev
>>>         code and add fallback to mlx5 code to handle legacy usecase
>>>
>>> v3:
>>>       - fix mlx5 build breakage
>>>
>>> v2:
>>>       - fix mlx5 review notes
>>>       - try device port ID first before parent in order to address
>>>         backward compatibility issue
> [snip]
>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4 1/3] security: add SA config option for inner pkt csum
  @ 2021-09-30 12:58  4% ` Archana Muniganti
  2021-10-03 21:09  0%   ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Archana Muniganti @ 2021-09-30 12:58 UTC (permalink / raw)
  To: gakhil, radu.nicolau, roy.fan.zhang, hemant.agrawal, konstantin.ananyev
  Cc: Archana Muniganti, anoobj, ktejasree, adwivedi, jerinj, dev

Add inner packet IPv4 hdr and L4 checksum enable options
in conf. These will be used in case of protocol offload.
Per SA, the application can specify whether the
checksum (compute/verify) can be offloaded to the security device.

Signed-off-by: Archana Muniganti <marchana@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |  1 +
 doc/guides/rel_notes/deprecation.rst       |  4 +--
 doc/guides/rel_notes/release_21_11.rst     |  4 +++
 lib/cryptodev/rte_cryptodev.h              |  2 ++
 lib/security/rte_security.h                | 31 ++++++++++++++++++++++
 5 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index c24814de98..96d95ddc81 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -33,6 +33,7 @@ Non-Byte aligned data  =
 Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
+Inner checksum         =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..8308e00ed4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -232,8 +232,8 @@ Deprecation Notices
   IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
-  will be updated with new fields to support new features like IPsec inner
-  checksum, TSO in case of protocol offload.
+  will be updated with new fields to support new features like TSO in case of
+  protocol offload.
 
 * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
   ``hdr_l3_len`` to configure tunnel L3 header length.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3ade7fe5ac..5480f05a99 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -196,6 +196,10 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
+  in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
+  packet IPv4 header checksum and L4 checksum need to be offloaded to
+  security device.
 
 Known Issues
 ------------
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index bb01f0f195..d9271a6c45 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
 /**< Support operations on multiple data-units message */
 #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
 /**< Support wrapped key in cipher xform  */
+#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
+/**< Support inner checksum computation/verification */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index ab1a6e1f65..0c5636377e 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -230,6 +230,37 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Do not match UDP ports
 	 */
 	uint32_t udp_ports_verify : 1;
+
+	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet IPv4 header checksum
+	 *      before tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet IP header checksum is not computed/verified.
+	 *
+	 * The checksum verification status would be set in mbuf using
+	 * PKT_RX_IP_CKSUM_xxx flags.
+	 *
+	 * Inner IP checksum computation can also be enabled(per operation)
+	 * by setting the flag PKT_TX_IP_CKSUM in mbuf.
+	 */
+	uint32_t ip_csum_enable : 1;
+
+	/** Compute/verify inner packet L4 checksum in tunnel mode
+	 *
+	 * * 1: For outbound, compute inner packet L4 checksum before
+	 *      tunnel encapsulation and for inbound, verify after
+	 *      tunnel decapsulation.
+	 * * 0: Inner packet L4 checksum is not computed/verified.
+	 *
+	 * The checksum verification status would be set in mbuf using
+	 * PKT_RX_L4_CKSUM_xxx flags.
+	 *
+	 * Inner L4 checksum computation can also be enabled(per operation)
+	 * by setting the flags PKT_TX_TCP_CKSUM or PKT_TX_SCTP_CKSUM or
+	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
+	 */
+	uint32_t l4_csum_enable : 1;
 };
 
 /** IPSec security association direction */
-- 
2.22.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
  2021-09-30 12:51  0%       ` Singh, Aman Deep
@ 2021-09-30 13:40  0%         ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-09-30 13:40 UTC (permalink / raw)
  To: Singh, Aman Deep, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov

On 9/30/21 3:51 PM, Singh, Aman Deep wrote:
> 
> On 9/30/2021 5:33 PM, Andrew Rybchenko wrote:
>> On 9/29/21 2:13 PM, Singh, Aman Deep wrote:
>>> On 9/13/2021 4:56 PM, Andrew Rybchenko wrote:
>>>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>>>
>>>> Getting a list of representors from a representor does not make sense.
>>>> Instead, a parent device should be used.
>>>>
>>>> To this end, extend the rte_eth_dev_data structure to include the
>>>> port ID
>>>> of the backing device for representors.
>>>>
>>>> Signed-off-by: Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>>>> Acked-by: Beilei Xing <beilei.xing@intel.com>
>>>> ---
>>>> The new field is added into the hole in rte_eth_dev_data structure.
>>>> The patch does not change ABI, but extra care is required since ABI
>>>> check is disabled for the structure because of the libabigail bug [1].
>>>> It should not be a problem anyway since 21.11 is a ABI breaking
>>>> release.
>>>>
>>>> Potentially it is bad for out-of-tree drivers which implement
>>>> representors but do not fill in a new parert_port_id field in
>>>> rte_eth_dev_data structure. Get ID by name will not work.
>>> Did we change name of new field from parert_port_id to backer_port_id.
>> Yes, see v5 changelog below.
>> It is done to address review notes from Ferruh on v4.
> 
> Maybe I did not put it clearly, my bad. Just wanted, in above lines also
> the usage
> of "parert_port_id" should be changed.

Thanks, I'll fix it in v6, but I think it is not worth a
respin since it is not a part of the description. Just extra
information.

>>
>>>> mlx5 changes should be reviwed by maintainers very carefully, since
>>>> we are not sure if we patch it correctly.
>>>>
>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>>>
>>>> v5:
>>>>       - try to improve name: backer_port_id instead of parent_port_id
>>>>       - init new field to RTE_MAX_ETHPORTS on allocation to avoid
>>>>         zero port usage by default
>>>>
>>>> v4:
>>>>       - apply mlx5 review notes: remove fallback from generic ethdev
>>>>         code and add fallback to mlx5 code to handle legacy usecase
>>>>
>>>> v3:
>>>>       - fix mlx5 build breakage
>>>>
>>>> v2:
>>>>       - fix mlx5 review notes
>>>>       - try device port ID first before parent in order to address
>>>>         backward compatibility issue
>> [snip]
>>


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 1/3] security: rework session framework
  @ 2021-09-30 14:50  1% ` Akhil Goyal
  2021-09-30 14:50  1% ` [dpdk-dev] [PATCH 3/3] cryptodev: " Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2021-09-30 14:50 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

As per the current design, rte_security_session_create()
unnecessarily uses 2 mempool objects for a single session.
Also, the structure rte_security_session is not directly used
by the application; it may cause ABI breakage if the structure
is modified in future.

To address these two issues, the API will now take only 1 mempool
object instead of 2 and return a void pointer directly
to the session private data. With this change, the library layer
will get the object from mempool and pass session_private_data
to the PMD for filling the PMD data.
Since set and get pkt metadata for security sessions are now
made inline for Inline crypto/proto mode, a new member fast_mdata
is added to the rte_security_session.
The opaque data and fast_mdata will be accessed via inline
APIs which can do pointer manipulations inside the library from
the session_private_data pointer coming from the application.

TODO:
- inline APIs for opaque data and fast_mdata
- move rte_security_session struct to security_driver.h

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 app/test-crypto-perf/cperf_ops.c              |   8 +-
 .../cperf_test_pmd_cyclecount.c               |   2 +-
 app/test/test_cryptodev.c                     |  17 +-
 app/test/test_ipsec.c                         |  11 +-
 app/test/test_security.c                      | 229 ++++++++----------
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c    |   5 +-
 .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c    |  30 +--
 drivers/crypto/caam_jr/caam_jr.c              |  32 +--
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |   4 +-
 drivers/crypto/cnxk/cn10k_ipsec.c             |  53 +---
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |   2 +-
 drivers/crypto/cnxk/cn9k_ipsec.c              |  50 +---
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  39 +--
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  38 ++-
 drivers/crypto/mvsam/rte_mrvl_pmd.c           |   3 +-
 drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  11 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_sec.c |  54 +----
 drivers/crypto/qat/qat_sym.c                  |   3 +-
 drivers/crypto/qat/qat_sym.h                  |   8 +-
 drivers/crypto/qat/qat_sym_session.c          |  21 +-
 drivers/crypto/qat/qat_sym_session.h          |   4 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  36 +--
 drivers/net/octeontx2/otx2_ethdev_sec.c       |  51 +---
 drivers/net/octeontx2/otx2_ethdev_sec_tx.h    |   2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  36 +--
 examples/ipsec-secgw/ipsec.c                  |   9 +-
 lib/security/rte_security.c                   |  28 ++-
 lib/security/rte_security.h                   |  41 ++--
 lib/security/rte_security_driver.h            |  16 +-
 30 files changed, 264 insertions(+), 581 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 4b7d66edb2..1b3cbe77b9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -49,8 +49,6 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 
 	for (i = 0; i < nb_ops; i++) {
 		struct rte_crypto_sym_op *sym_op = ops[i]->sym;
-		struct rte_security_session *sec_sess =
-			(struct rte_security_session *)sess;
 		uint32_t buf_sz;
 
 		uint32_t *per_pkt_hfn = rte_crypto_op_ctod_offset(ops[i],
@@ -58,7 +56,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 		*per_pkt_hfn = options->pdcp_ses_hfn_en ? 0 : PDCP_DEFAULT_HFN;
 
 		ops[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-		rte_security_attach_session(ops[i], sec_sess);
+		rte_security_attach_session(ops[i], (void *)sess);
 		sym_op->m_src = (struct rte_mbuf *)((uint8_t *)ops[i] +
 							src_buf_offset);
 
@@ -673,7 +671,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp, priv_mp);
+					&sess_conf, sess_mp);
 	}
 	if (options->op_type == CPERF_DOCSIS) {
 		enum rte_security_docsis_direction direction;
@@ -716,7 +714,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create security session */
 		return (void *)rte_security_session_create(ctx,
-					&sess_conf, sess_mp, priv_mp);
+					&sess_conf, sess_mp);
 	}
 #endif
 	sess = rte_cryptodev_sym_session_create(sess_mp);
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index 844659aeca..cbbbedd9ba 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -70,7 +70,7 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
 				(struct rte_security_ctx *)
 				rte_cryptodev_get_sec_ctx(ctx->dev_id);
 			rte_security_session_destroy(sec_ctx,
-				(struct rte_security_session *)ctx->sess);
+				(void *)ctx->sess);
 		} else
 #endif
 		{
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6ceeb487f..82f819211a 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -81,7 +81,7 @@ struct crypto_unittest_params {
 	union {
 		struct rte_cryptodev_sym_session *sess;
 #ifdef RTE_LIB_SECURITY
-		struct rte_security_session *sec_session;
+		void *sec_session;
 #endif
 	};
 #ifdef RTE_LIB_SECURITY
@@ -8276,8 +8276,7 @@ static int test_pdcp_proto(int i, int oop, enum rte_crypto_cipher_operation opc,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_mpool,
-				ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -8537,8 +8536,7 @@ test_pdcp_proto_SGL(int i, int oop,
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx,
-				&sess_conf, ts_params->session_mpool,
-				ts_params->session_priv_mpool);
+				&sess_conf, ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s()-%d line %d failed %s: ",
@@ -9022,8 +9020,7 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (ut_params->sec_session == NULL)
 		return TEST_SKIPPED;
@@ -9420,8 +9417,7 @@ test_docsis_proto_uplink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s(%d) line %d: %s\n",
@@ -9596,8 +9592,7 @@ test_docsis_proto_downlink(int i, struct docsis_test_data *d_td)
 
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
-					ts_params->session_mpool,
-					ts_params->session_priv_mpool);
+					ts_params->session_mpool);
 
 	if (!ut_params->sec_session) {
 		printf("TestCase %s(%d) line %d: %s\n",
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..2ffa2a8e79 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -148,18 +148,16 @@ const struct supported_auth_algo auth_algos[] = {
 
 static int
 dummy_sec_create(void *device, struct rte_security_session_conf *conf,
-	struct rte_security_session *sess, struct rte_mempool *mp)
+	void *sess)
 {
 	RTE_SET_USED(device);
 	RTE_SET_USED(conf);
-	RTE_SET_USED(mp);
-
-	sess->sess_private_data = NULL;
+	RTE_SET_USED(sess);
 	return 0;
 }
 
 static int
-dummy_sec_destroy(void *device, struct rte_security_session *sess)
+dummy_sec_destroy(void *device, void *sess)
 {
 	RTE_SET_USED(device);
 	RTE_SET_USED(sess);
@@ -631,8 +629,7 @@ create_dummy_sec_session(struct ipsec_unitest_params *ut,
 	static struct rte_security_session_conf conf;
 
 	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
-					&conf, qp->mp_session,
-					qp->mp_session_private);
+					&conf, qp->mp_session);
 
 	if (ut->ss[j].security.ses == NULL)
 		return -ENOMEM;
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 060cf1ffa8..b02c2cf207 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -207,7 +207,7 @@
  * and put back in session_destroy.
  *
  * @param   expected_priv_mp_usage	expected number of used priv mp objects
- */
+ *
 #define TEST_ASSERT_PRIV_MP_USAGE(expected_priv_mp_usage) do {		\
 	struct security_testsuite_params *ts_params = &testsuite_params;\
 	unsigned int priv_mp_usage;					\
@@ -218,7 +218,7 @@
 			"but there are %u allocated objects",		\
 			expected_priv_mp_usage, priv_mp_usage);		\
 } while (0)
-
+*/
 /**
  * Mockup structures and functions for rte_security_ops;
  *
@@ -253,37 +253,37 @@
 static struct mock_session_create_data {
 	void *device;
 	struct rte_security_session_conf *conf;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_mempool *mp;
-	struct rte_mempool *priv_mp;
+//	struct rte_mempool *priv_mp;
 
 	int ret;
 
 	int called;
 	int failed;
-} mock_session_create_exp = {NULL, NULL, NULL, NULL, NULL, 0, 0, 0};
+} mock_session_create_exp = {NULL, NULL, NULL, NULL, 0, 0, 0};
 
 static int
 mock_session_create(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *priv_mp)
+		void *sess)
+//		struct rte_mempool *priv_mp)
 {
-	void *sess_priv;
-	int ret;
+//	void *sess_priv;
+//	int ret;
 
 	mock_session_create_exp.called++;
 
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, conf);
-	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
+//	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_create_exp, priv_mp);
 
 	if (mock_session_create_exp.ret == 0) {
-		ret = rte_mempool_get(priv_mp, &sess_priv);
-		TEST_ASSERT_EQUAL(0, ret,
-			"priv mempool does not have enough objects");
+//		ret = rte_mempool_get(priv_mp, &sess_priv);
+//		TEST_ASSERT_EQUAL(0, ret,
+//			"priv mempool does not have enough objects");
 
-		set_sec_session_private_data(sess, sess_priv);
+//		set_sec_session_private_data(sess, sess_priv);
 		mock_session_create_exp.sess = sess;
 	}
 
@@ -297,7 +297,7 @@ mock_session_create(void *device,
  */
 static struct mock_session_update_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_security_session_conf *conf;
 
 	int ret;
@@ -308,7 +308,7 @@ static struct mock_session_update_data {
 
 static int
 mock_session_update(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_security_session_conf *conf)
 {
 	mock_session_update_exp.called++;
@@ -351,7 +351,7 @@ mock_session_get_size(void *device)
  */
 static struct mock_session_stats_get_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_security_stats *stats;
 
 	int ret;
@@ -362,7 +362,7 @@ static struct mock_session_stats_get_data {
 
 static int
 mock_session_stats_get(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_security_stats *stats)
 {
 	mock_session_stats_get_exp.called++;
@@ -381,7 +381,7 @@ mock_session_stats_get(void *device,
  */
 static struct mock_session_destroy_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 
 	int ret;
 
@@ -390,15 +390,9 @@ static struct mock_session_destroy_data {
 } mock_session_destroy_exp = {NULL, NULL, 0, 0, 0};
 
 static int
-mock_session_destroy(void *device, struct rte_security_session *sess)
+mock_session_destroy(void *device, void *sess)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
-
 	mock_session_destroy_exp.called++;
-	if ((mock_session_destroy_exp.ret == 0) && (sess_priv != NULL)) {
-		rte_mempool_put(rte_mempool_from_obj(sess_priv), sess_priv);
-		set_sec_session_private_data(sess, NULL);
-	}
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, device);
 	MOCK_TEST_ASSERT_POINTER_PARAMETER(mock_session_destroy_exp, sess);
 
@@ -412,7 +406,7 @@ mock_session_destroy(void *device, struct rte_security_session *sess)
  */
 static struct mock_set_pkt_metadata_data {
 	void *device;
-	struct rte_security_session *sess;
+	void *sess;
 	struct rte_mbuf *m;
 	void *params;
 
@@ -424,7 +418,7 @@ static struct mock_set_pkt_metadata_data {
 
 static int
 mock_set_pkt_metadata(void *device,
-		struct rte_security_session *sess,
+		void *sess,
 		struct rte_mbuf *m,
 		void *params)
 {
@@ -536,7 +530,6 @@ struct rte_security_ops mock_ops = {
  */
 static struct security_testsuite_params {
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 } testsuite_params = { NULL };
 
 /**
@@ -549,7 +542,7 @@ static struct security_testsuite_params {
 static struct security_unittest_params {
 	struct rte_security_ctx ctx;
 	struct rte_security_session_conf conf;
-	struct rte_security_session *sess;
+	void *sess;
 } unittest_params = {
 	.ctx = {
 		.device = NULL,
@@ -563,7 +556,7 @@ static struct security_unittest_params {
 #define SECURITY_TEST_PRIV_MEMPOOL_NAME "SecurityTestPrivMp"
 #define SECURITY_TEST_MEMPOOL_SIZE 15
 #define SECURITY_TEST_SESSION_OBJ_SZ sizeof(struct rte_security_session)
-#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 64
+#define SECURITY_TEST_SESSION_PRIV_OBJ_SZ 1024
 
 /**
  * testsuite_setup initializes whole test suite parameters.
@@ -577,26 +570,27 @@ testsuite_setup(void)
 	ts_params->session_mpool = rte_mempool_create(
 			SECURITY_TEST_MEMPOOL_NAME,
 			SECURITY_TEST_MEMPOOL_SIZE,
-			SECURITY_TEST_SESSION_OBJ_SZ,
+			SECURITY_TEST_SESSION_OBJ_SZ +
+			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
 			0, 0, NULL, NULL, NULL, NULL,
 			SOCKET_ID_ANY, 0);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"Cannot create mempool %s\n", rte_strerror(rte_errno));
 
-	ts_params->session_priv_mpool = rte_mempool_create(
-			SECURITY_TEST_PRIV_MEMPOOL_NAME,
-			SECURITY_TEST_MEMPOOL_SIZE,
-			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
-			0, 0, NULL, NULL, NULL, NULL,
-			SOCKET_ID_ANY, 0);
-	if (ts_params->session_priv_mpool == NULL) {
-		RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
-				"Cannot create priv mempool %s\n",
-				__func__, __LINE__, rte_strerror(rte_errno));
-		rte_mempool_free(ts_params->session_mpool);
-		ts_params->session_mpool = NULL;
-		return TEST_FAILED;
-	}
+//	ts_params->session_priv_mpool = rte_mempool_create(
+//			SECURITY_TEST_PRIV_MEMPOOL_NAME,
+//			SECURITY_TEST_MEMPOOL_SIZE,
+//			SECURITY_TEST_SESSION_PRIV_OBJ_SZ,
+//			0, 0, NULL, NULL, NULL, NULL,
+//			SOCKET_ID_ANY, 0);
+//	if (ts_params->session_priv_mpool == NULL) {
+//		RTE_LOG(ERR, USER1, "TestCase %s() line %d failed (null): "
+//				"Cannot create priv mempool %s\n",
+//				__func__, __LINE__, rte_strerror(rte_errno));
+//		rte_mempool_free(ts_params->session_mpool);
+//		ts_params->session_mpool = NULL;
+//		return TEST_FAILED;
+//	}
 
 	return TEST_SUCCESS;
 }
@@ -612,10 +606,6 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
 	}
-	if (ts_params->session_priv_mpool) {
-		rte_mempool_free(ts_params->session_priv_mpool);
-		ts_params->session_priv_mpool = NULL;
-	}
 }
 
 /**
@@ -704,7 +694,7 @@ ut_setup_with_session(void)
 {
 	struct security_unittest_params *ut_params = &unittest_params;
 	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	int ret = ut_setup();
 	if (ret != TEST_SUCCESS)
@@ -713,12 +703,11 @@ ut_setup_with_session(void)
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
+	mock_session_get_size_exp.called = 0;
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -757,16 +746,14 @@ test_session_create_inv_context(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(NULL, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -781,18 +768,16 @@ test_session_create_inv_context_ops(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	ut_params->ctx.ops = NULL;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -807,18 +792,16 @@ test_session_create_inv_context_ops_fun(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	ut_params->ctx.ops = &empty_ops;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -832,16 +815,14 @@ test_session_create_inv_configuration(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, NULL,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -855,16 +836,14 @@ static int
 test_session_create_inv_mempool(void)
 {
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			NULL, ts_params->session_priv_mpool);
+			NULL);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -874,24 +853,24 @@ test_session_create_inv_mempool(void)
  * Test execution of rte_security_session_create with NULL session
  * priv mempool
  */
-static int
-test_session_create_inv_sess_priv_mempool(void)
-{
-	struct security_unittest_params *ut_params = &unittest_params;
-	struct security_testsuite_params *ts_params = &testsuite_params;
-	struct rte_security_session *sess;
-
-	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool, NULL);
-	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
-			sess, NULL, "%p");
-	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
-	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
-	TEST_ASSERT_SESSION_COUNT(0);
-
-	return TEST_SUCCESS;
-}
+//static int
+//test_session_create_inv_sess_priv_mempool(void)
+//{
+//	struct security_unittest_params *ut_params = &unittest_params;
+//	struct security_testsuite_params *ts_params = &testsuite_params;
+//	struct rte_security_session *sess;
+//
+//	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
+//			ts_params->session_mpool, NULL);
+//	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
+//			sess, NULL, "%p");
+//	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
+//	TEST_ASSERT_MEMPOOL_USAGE(0);
+//	TEST_ASSERT_PRIV_MP_USAGE(0);
+//	TEST_ASSERT_SESSION_COUNT(0);
+//
+//	return TEST_SUCCESS;
+//}
 
 /**
  * Test execution of rte_security_session_create in case when mempool
@@ -902,9 +881,9 @@ test_session_create_mempool_empty(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
-	void *tmp1[SECURITY_TEST_MEMPOOL_SIZE];
-	struct rte_security_session *sess;
+//	struct rte_security_session *tmp[SECURITY_TEST_MEMPOOL_SIZE];
+	void *tmp[SECURITY_TEST_MEMPOOL_SIZE];
+	void *sess;
 
 	/* Get all available objects from mempool. */
 	int i, ret;
@@ -914,34 +893,34 @@ test_session_create_mempool_empty(void)
 		TEST_ASSERT_EQUAL(0, ret,
 				"Expect getting %d object from mempool"
 				" to succeed", i);
-		ret = rte_mempool_get(ts_params->session_priv_mpool,
-				(void **)(&tmp1[i]));
-		TEST_ASSERT_EQUAL(0, ret,
-				"Expect getting %d object from priv mempool"
-				" to succeed", i);
+//		ret = rte_mempool_get(ts_params->session_priv_mpool,
+//				(void **)(&tmp1[i]));
+//		TEST_ASSERT_EQUAL(0, ret,
+//				"Expect getting %d object from priv mempool"
+//				" to succeed", i);
 	}
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
-	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+//	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
+//			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
-	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
+//	TEST_ASSERT_PRIV_MP_USAGE(SECURITY_TEST_MEMPOOL_SIZE);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/* Put objects back to the pool. */
 	for (i = 0; i < SECURITY_TEST_MEMPOOL_SIZE; ++i) {
 		rte_mempool_put(ts_params->session_mpool,
 				(void *)(tmp[i]));
-		rte_mempool_put(ts_params->session_priv_mpool,
-				(tmp1[i]));
+//		rte_mempool_put(ts_params->session_priv_mpool,
+//				(tmp1[i]));
 	}
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
+//	TEST_ASSERT_PRIV_MP_USAGE(0);
 
 	return TEST_SUCCESS;
 }
@@ -955,22 +934,22 @@ test_session_create_ops_failure(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
+//	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = -1;	/* Return failure status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
+//			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_RET(rte_security_session_create,
 			sess, NULL, "%p");
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
+//	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	return TEST_SUCCESS;
@@ -984,17 +963,17 @@ test_session_create_success(void)
 {
 	struct security_testsuite_params *ts_params = &testsuite_params;
 	struct security_unittest_params *ut_params = &unittest_params;
-	struct rte_security_session *sess;
+	void *sess;
 
 	mock_session_create_exp.device = NULL;
 	mock_session_create_exp.conf = &ut_params->conf;
 	mock_session_create_exp.mp = ts_params->session_mpool;
-	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
+//	mock_session_create_exp.priv_mp = ts_params->session_priv_mpool;
 	mock_session_create_exp.ret = 0;	/* Return success status. */
 
 	sess = rte_security_session_create(&ut_params->ctx, &ut_params->conf,
-			ts_params->session_mpool,
-			ts_params->session_priv_mpool);
+			ts_params->session_mpool);
+//			ts_params->session_priv_mpool);
 	TEST_ASSERT_MOCK_FUNCTION_CALL_NOT_NULL(rte_security_session_create,
 			sess);
 	TEST_ASSERT_EQUAL(sess, mock_session_create_exp.sess,
@@ -1003,7 +982,7 @@ test_session_create_success(void)
 			sess, mock_session_create_exp.sess);
 	TEST_ASSERT_MOCK_CALLS(mock_session_create_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
+//	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	/*
@@ -1389,7 +1368,6 @@ test_session_destroy_inv_context(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(NULL, ut_params->sess);
@@ -1397,7 +1375,6 @@ test_session_destroy_inv_context(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1414,7 +1391,6 @@ test_session_destroy_inv_context_ops(void)
 	ut_params->ctx.ops = NULL;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1423,7 +1399,6 @@ test_session_destroy_inv_context_ops(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1440,7 +1415,6 @@ test_session_destroy_inv_context_ops_fun(void)
 	ut_params->ctx.ops = &empty_ops;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1449,7 +1423,6 @@ test_session_destroy_inv_context_ops_fun(void)
 			ret, -ENOTSUP, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1464,7 +1437,6 @@ test_session_destroy_inv_session(void)
 	struct security_unittest_params *ut_params = &unittest_params;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx, NULL);
@@ -1472,7 +1444,6 @@ test_session_destroy_inv_session(void)
 			ret, -EINVAL, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 0);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1492,7 +1463,6 @@ test_session_destroy_ops_failure(void)
 	mock_session_destroy_exp.ret = -1;
 
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1501,7 +1471,6 @@ test_session_destroy_ops_failure(void)
 			ret, -1, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	return TEST_SUCCESS;
@@ -1519,7 +1488,6 @@ test_session_destroy_success(void)
 	mock_session_destroy_exp.sess = ut_params->sess;
 	mock_session_destroy_exp.ret = 0;
 	TEST_ASSERT_MEMPOOL_USAGE(1);
-	TEST_ASSERT_PRIV_MP_USAGE(1);
 	TEST_ASSERT_SESSION_COUNT(1);
 
 	int ret = rte_security_session_destroy(&ut_params->ctx,
@@ -1528,7 +1496,6 @@ test_session_destroy_success(void)
 			ret, 0, "%d");
 	TEST_ASSERT_MOCK_CALLS(mock_session_destroy_exp, 1);
 	TEST_ASSERT_MEMPOOL_USAGE(0);
-	TEST_ASSERT_PRIV_MP_USAGE(0);
 	TEST_ASSERT_SESSION_COUNT(0);
 
 	/*
@@ -2495,8 +2462,8 @@ static struct unit_test_suite security_testsuite  = {
 				test_session_create_inv_configuration),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_inv_mempool),
-		TEST_CASE_ST(ut_setup, ut_teardown,
-				test_session_create_inv_sess_priv_mempool),
+//		TEST_CASE_ST(ut_setup, ut_teardown,
+//				test_session_create_inv_sess_priv_mempool),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 				test_session_create_mempool_empty),
 		TEST_CASE_ST(ut_setup, ut_teardown,
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 60963a8208..93a56994da 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -1022,8 +1022,7 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
 	} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
 		if (likely(op->sym->sec_session != NULL))
 			sess = (struct aesni_mb_session *)
-					get_sec_session_private_data(
-						op->sym->sec_session);
+					(op->sym->sec_session);
 #endif
 	} else {
 		void *_sess = rte_cryptodev_sym_session_create(qp->sess_mp);
@@ -1639,7 +1638,7 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
 		 * this is for DOCSIS
 		 */
 		is_docsis_sec = 1;
-		sess = get_sec_session_private_data(op->sym->sec_session);
+		sess = (struct aesni_mb_session *)(op->sym->sec_session);
 	} else
 #endif
 	{
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index 48a8f91868..39c67e3952 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -1056,10 +1056,8 @@ struct rte_cryptodev_ops *rte_aesni_mb_pmd_ops = &aesni_mb_pmd_ops;
  */
 static int
 aesni_mb_pmd_sec_sess_create(void *dev, struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *mempool)
+			     void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
@@ -1069,40 +1067,22 @@ aesni_mb_pmd_sec_sess_create(void *dev, struct rte_security_session_conf *conf,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		AESNI_MB_LOG(ERR, "Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = aesni_mb_set_docsis_sec_session_parameters(cdev, conf,
-			sess_private_data);
-
+	ret = aesni_mb_set_docsis_sec_session_parameters(cdev, conf, sess);
 	if (ret != 0) {
 		AESNI_MB_LOG(ERR, "Failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-aesni_mb_pmd_sec_sess_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+aesni_mb_pmd_sec_sess_destroy(void *dev __rte_unused, void *sess)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
 
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		memset(sess_priv, 0, sizeof(struct aesni_mb_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
 	return 0;
 }
 
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 258750afe7..ce7a100778 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -1361,9 +1361,7 @@ caam_jr_enqueue_op(struct rte_crypto_op *op, struct caam_jr_qp *qp)
 					cryptodev_driver_id);
 		break;
 	case RTE_CRYPTO_OP_SECURITY_SESSION:
-		ses = (struct caam_jr_session *)
-			get_sec_session_private_data(
-					op->sym->sec_session);
+		ses = (struct caam_jr_session *)(op->sym->sec_session);
 		break;
 	default:
 		CAAM_JR_DP_ERR("sessionless crypto op not supported");
@@ -1911,22 +1909,14 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 static int
 caam_jr_security_session_create(void *dev,
 				struct rte_security_session_conf *conf,
-				struct rte_security_session *sess,
-				struct rte_mempool *mempool)
+				void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CAAM_JR_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = caam_jr_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = caam_jr_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
@@ -1935,34 +1925,24 @@ caam_jr_security_session_create(void *dev,
 	}
 	if (ret != 0) {
 		CAAM_JR_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 static int
-caam_jr_security_session_destroy(void *dev __rte_unused,
-				 struct rte_security_session *sess)
+caam_jr_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
 
-	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+	struct caam_jr_session *s = (struct caam_jr_session *)sess;
 
+	if (sess) {
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(sess, 0, sizeof(struct caam_jr_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 3caf05aab9..99968cc353 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -120,8 +120,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 
 	if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
 		if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
-			sec_sess = get_sec_session_private_data(
-				sym_op->sec_session);
+			sec_sess = (struct cn10k_sec_session *)
+						(sym_op->sec_session);
 			ret = cpt_sec_inst_fill(op, sec_sess, &inst[0]);
 			if (unlikely(ret))
 				return 0;
diff --git a/drivers/crypto/cnxk/cn10k_ipsec.c b/drivers/crypto/cnxk/cn10k_ipsec.c
index ebb2a7ec48..6f31fd04c6 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec.c
+++ b/drivers/crypto/cnxk/cn10k_ipsec.c
@@ -35,16 +35,14 @@ static int
 cn10k_ipsec_outb_sa_create(struct roc_cpt *roc_cpt,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
 			   struct rte_crypto_sym_xform *crypto_xfrm,
-			   struct rte_security_session *sec_sess)
+			   struct cn10k_sec_session *sess)
 {
 	struct roc_ot_ipsec_outb_sa *out_sa;
 	struct cnxk_ipsec_outb_rlens rlens;
-	struct cn10k_sec_session *sess;
 	struct cn10k_ipsec_sa *sa;
 	union cpt_inst_w4 inst_w4;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	out_sa = &sa->out_sa;
 
@@ -93,15 +91,13 @@ static int
 cn10k_ipsec_inb_sa_create(struct roc_cpt *roc_cpt,
 			  struct rte_security_ipsec_xform *ipsec_xfrm,
 			  struct rte_crypto_sym_xform *crypto_xfrm,
-			  struct rte_security_session *sec_sess)
+			  struct cn10k_sec_session *sess)
 {
 	struct roc_ot_ipsec_inb_sa *in_sa;
-	struct cn10k_sec_session *sess;
 	struct cn10k_ipsec_sa *sa;
 	union cpt_inst_w4 inst_w4;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	in_sa = &sa->in_sa;
 
@@ -132,7 +128,7 @@ static int
 cn10k_ipsec_session_create(void *dev,
 			   struct rte_security_ipsec_xform *ipsec_xfrm,
 			   struct rte_crypto_sym_xform *crypto_xfrm,
-			   struct rte_security_session *sess)
+			   struct cn10k_sec_session *sess)
 {
 	struct rte_cryptodev *crypto_dev = dev;
 	struct roc_cpt *roc_cpt;
@@ -161,55 +157,28 @@ cn10k_ipsec_session_create(void *dev,
 
 static int
 cn10k_sec_session_create(void *device, struct rte_security_session_conf *conf,
-			 struct rte_security_session *sess,
-			 struct rte_mempool *mempool)
+			 void *sess)
 {
-	struct cn10k_sec_session *priv;
-	int ret;
+	struct cn10k_sec_session *priv = sess;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
 		return -EINVAL;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		plt_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
-	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {
-		ret = -ENOTSUP;
-		goto mempool_put;
-	}
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
-	ret = cn10k_ipsec_session_create(device, &conf->ipsec,
-					 conf->crypto_xform, sess);
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
+	return cn10k_ipsec_session_create(device, &conf->ipsec,
+					 conf->crypto_xform, priv);
 }
 
 static int
-cn10k_sec_session_destroy(void *device __rte_unused,
-			  struct rte_security_session *sess)
+cn10k_sec_session_destroy(void *device __rte_unused, void *sess)
 {
-	struct cn10k_sec_session *priv;
-	struct rte_mempool *sess_mp;
-
-	priv = get_sec_session_private_data(sess);
+	struct cn10k_sec_session *priv = sess;
 
 	if (priv == NULL)
 		return 0;
-
-	sess_mp = rte_mempool_from_obj(priv);
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
+	memset(priv, 0, sizeof(*priv));
 
 	return 0;
 }
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 75277936b0..4c2dc5b080 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -56,7 +56,7 @@ cn9k_cpt_sec_inst_fill(struct rte_crypto_op *op,
 		return -ENOTSUP;
 	}
 
-	priv = get_sec_session_private_data(op->sym->sec_session);
+	priv = (struct cn9k_sec_session *)(op->sym->sec_session);
 	sa = &priv->sa;
 
 	if (sa->dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
diff --git a/drivers/crypto/cnxk/cn9k_ipsec.c b/drivers/crypto/cnxk/cn9k_ipsec.c
index 63ae025030..f3a6df0145 100644
--- a/drivers/crypto/cnxk/cn9k_ipsec.c
+++ b/drivers/crypto/cnxk/cn9k_ipsec.c
@@ -274,13 +274,12 @@ static int
 cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
 			  struct rte_security_ipsec_xform *ipsec,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sec_sess)
+			  struct cn9k_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform = crypto_xform->next;
 	struct roc_ie_on_ip_template *template = NULL;
 	struct cnxk_cpt_inst_tmpl *inst_tmpl;
 	struct roc_ie_on_outb_sa *out_sa;
-	struct cn9k_sec_session *sess;
 	struct roc_ie_on_sa_ctl *ctl;
 	struct cn9k_ipsec_sa *sa;
 	struct rte_ipv6_hdr *ip6;
@@ -292,7 +291,6 @@ cn9k_ipsec_outb_sa_create(struct cnxk_cpt_qp *qp,
 	size_t ctx_len;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	out_sa = &sa->out_sa;
 	ctl = &out_sa->common_sa.ctl;
@@ -420,12 +418,11 @@ static int
 cn9k_ipsec_inb_sa_create(struct cnxk_cpt_qp *qp,
 			 struct rte_security_ipsec_xform *ipsec,
 			 struct rte_crypto_sym_xform *crypto_xform,
-			 struct rte_security_session *sec_sess)
+			 struct cn9k_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform = crypto_xform;
 	struct cnxk_cpt_inst_tmpl *inst_tmpl;
 	struct roc_ie_on_inb_sa *in_sa;
-	struct cn9k_sec_session *sess;
 	struct cn9k_ipsec_sa *sa;
 	const uint8_t *auth_key;
 	union cpt_inst_w4 w4;
@@ -434,7 +431,6 @@ cn9k_ipsec_inb_sa_create(struct cnxk_cpt_qp *qp,
 	size_t ctx_len = 0;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sa = &sess->sa;
 	in_sa = &sa->in_sa;
 
@@ -498,7 +494,7 @@ static int
 cn9k_ipsec_session_create(void *dev,
 			  struct rte_security_ipsec_xform *ipsec_xform,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sess)
+			  struct cn9k_sec_session *sess)
 {
 	struct rte_cryptodev *crypto_dev = dev;
 	struct cnxk_cpt_qp *qp;
@@ -529,53 +525,32 @@ cn9k_ipsec_session_create(void *dev,
 
 static int
 cn9k_sec_session_create(void *device, struct rte_security_session_conf *conf,
-			struct rte_security_session *sess,
-			struct rte_mempool *mempool)
+			void *sess)
 {
-	struct cn9k_sec_session *priv;
-	int ret;
+	struct cn9k_sec_session *priv = sess;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
 		return -EINVAL;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		plt_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
 	memset(priv, 0, sizeof(*priv));
 
-	set_sec_session_private_data(sess, priv);
-
-	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {
-		ret = -ENOTSUP;
-		goto mempool_put;
-	}
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
 
-	ret = cn9k_ipsec_session_create(device, &conf->ipsec,
-					conf->crypto_xform, sess);
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
+	return cn9k_ipsec_session_create(device, &conf->ipsec,
+					conf->crypto_xform, priv);
 }
 
 static int
-cn9k_sec_session_destroy(void *device __rte_unused,
-			 struct rte_security_session *sess)
+cn9k_sec_session_destroy(void *device __rte_unused, void *sess)
 {
 	struct roc_ie_on_outb_sa *out_sa;
 	struct cn9k_sec_session *priv;
-	struct rte_mempool *sess_mp;
 	struct roc_ie_on_sa_ctl *ctl;
 	struct cn9k_ipsec_sa *sa;
 
-	priv = get_sec_session_private_data(sess);
+	priv = sess;
 	if (priv == NULL)
 		return 0;
 
@@ -587,13 +562,8 @@ cn9k_sec_session_destroy(void *device __rte_unused,
 
 	rte_io_wmb();
 
-	sess_mp = rte_mempool_from_obj(priv);
-
 	memset(priv, 0, sizeof(*priv));
 
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
-
 	return 0;
 }
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index dfa72f3f93..176f1a27a0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1358,8 +1358,7 @@ build_sec_fd(struct rte_crypto_op *op,
 				op->sym->session, cryptodev_driver_id);
 #ifdef RTE_LIB_SECURITY
 	else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
-		sess = (dpaa2_sec_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+		sess = (dpaa2_sec_session *)(op->sym->sec_session);
 #endif
 	else
 		return -ENOTSUP;
@@ -1532,7 +1531,7 @@ sec_simple_fd_to_mbuf(const struct qbman_fd *fd)
 	struct rte_crypto_op *op;
 	uint16_t len = DPAA2_GET_FD_LEN(fd);
 	int16_t diff = 0;
-	dpaa2_sec_session *sess_priv __rte_unused;
+	dpaa2_sec_session *sess_priv;
 
 	struct rte_mbuf *mbuf = DPAA2_INLINE_MBUF_FROM_BUF(
 		DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd)),
@@ -1545,8 +1544,7 @@ sec_simple_fd_to_mbuf(const struct qbman_fd *fd)
 	mbuf->buf_iova = op->sym->aead.digest.phys_addr;
 	op->sym->aead.digest.phys_addr = 0L;
 
-	sess_priv = (dpaa2_sec_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+	sess_priv = (dpaa2_sec_session *)(op->sym->sec_session);
 	if (sess_priv->dir == DIR_ENC)
 		mbuf->data_off += SEC_FLC_DHR_OUTBOUND;
 	else
@@ -3395,63 +3393,44 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
 static int
 dpaa2_sec_security_session_create(void *dev,
 				  struct rte_security_session_conf *conf,
-				  struct rte_security_session *sess,
-				  struct rte_mempool *mempool)
+				  void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA2_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = dpaa2_sec_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa2_sec_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
 	case RTE_SECURITY_PROTOCOL_PDCP:
-		ret = dpaa2_sec_set_pdcp_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa2_sec_set_pdcp_session(cdev, conf, sess);
 		break;
 	default:
 		return -EINVAL;
 	}
 	if (ret != 0) {
 		DPAA2_SEC_ERR("Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-dpaa2_sec_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
 
-	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
 
+	if (sess) {
 		rte_free(s->ctxt);
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(dpaa2_sec_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index d5aa2748d6..5a087df090 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -1793,8 +1793,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 #ifdef RTE_LIB_SECURITY
 			case RTE_CRYPTO_OP_SECURITY_SESSION:
 				ses = (dpaa_sec_session *)
-					get_sec_session_private_data(
-							op->sym->sec_session);
+					(op->sym->sec_session);
 				break;
 #endif
 			default:
@@ -2572,7 +2571,6 @@ static inline void
 free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 {
 	struct dpaa_sec_dev_private *qi = dev->data->dev_private;
-	struct rte_mempool *sess_mp = rte_mempool_from_obj((void *)s);
 	uint8_t i;
 
 	for (i = 0; i < MAX_DPAA_CORES; i++) {
@@ -2582,7 +2580,6 @@ free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 		s->qp[i] = NULL;
 	}
 	free_session_data(s);
-	rte_mempool_put(sess_mp, (void *)s);
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
@@ -3117,26 +3114,23 @@ dpaa_sec_set_pdcp_session(struct rte_cryptodev *dev,
 static int
 dpaa_sec_security_session_create(void *dev,
 				 struct rte_security_session_conf *conf,
-				 struct rte_security_session *sess,
-				 struct rte_mempool *mempool)
+				 void *sess)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
 
 	switch (conf->protocol) {
 	case RTE_SECURITY_PROTOCOL_IPSEC:
-		ret = dpaa_sec_set_ipsec_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa_sec_set_ipsec_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_PDCP:
-		ret = dpaa_sec_set_pdcp_session(cdev, conf,
-				sess_private_data);
+		ret = dpaa_sec_set_pdcp_session(cdev, conf, sess);
 		break;
 	case RTE_SECURITY_PROTOCOL_MACSEC:
 		return -ENOTSUP;
@@ -3146,28 +3140,24 @@ dpaa_sec_security_session_create(void *dev,
 	if (ret != 0) {
 		DPAA_SEC_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
 
 	return ret;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-dpaa_sec_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+dpaa_sec_security_session_destroy(void *dev __rte_unused, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	void *sess_priv = get_sec_session_private_data(sess);
-	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
+	dpaa_sec_session *s = (dpaa_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess)
 		free_session_memory((struct rte_cryptodev *)dev, s);
-		set_sec_session_private_data(sess, NULL);
-	}
 	return 0;
 }
 #endif
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd.c b/drivers/crypto/mvsam/rte_mrvl_pmd.c
index a72642a772..245a4ad353 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd.c
@@ -773,8 +773,7 @@ mrvl_request_prepare_sec(struct sam_cio_ipsec_params *request,
 		return -EINVAL;
 	}
 
-	sess = (struct mrvl_crypto_session *)get_sec_session_private_data(
-			op->sym->sec_session);
+	sess = (struct mrvl_crypto_session *)(op->sym->sec_session);
 	if (unlikely(sess == NULL)) {
 		MRVL_LOG(ERR, "Session was not created for this device! %d",
 			 cryptodev_driver_id);
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index 3064b1f136..e04a2c88c7 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -913,16 +913,12 @@ mrvl_crypto_pmd_security_session_create(__rte_unused void *dev,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static int
-mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused,
-		struct rte_security_session *sess)
+mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused, void *sess)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
-
 	/* Zero out the whole structure */
-	if (sess_priv) {
+	if (sess) {
-		struct mrvl_crypto_session *mrvl_sess =
-			(struct mrvl_crypto_session *)sess_priv;
+		struct mrvl_crypto_session *mrvl_sess =
+			(struct mrvl_crypto_session *)sess;
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
 
 		if (mrvl_sess->sam_sess &&
 		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
@@ -932,9 +928,6 @@ mrvl_crypto_pmd_security_session_destroy(void *dev __rte_unused,
 		rte_free(mrvl_sess->sam_sess_params.cipher_key);
 		rte_free(mrvl_sess->sam_sess_params.auth_key);
 		rte_free(mrvl_sess->sam_sess_params.cipher_iv);
-		memset(sess, 0, sizeof(struct rte_security_session));
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..7b744cd4b4 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -702,7 +702,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	uint8_t esn;
 	int ret;
 
-	priv = get_sec_session_private_data(op->sym->sec_session);
+	priv = (struct otx2_sec_session *)(op->sym->sec_session);
 	sess = &priv->ipsec.lp;
 	sa = &sess->in_sa;
 
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
index a5db40047d..56900e3187 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
@@ -203,7 +203,7 @@ static int
 crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
 				     struct rte_security_ipsec_xform *ipsec,
 				     struct rte_crypto_sym_xform *crypto_xform,
-				     struct rte_security_session *sec_sess)
+				     struct otx2_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_ipsec_po_ip_template *template = NULL;
@@ -212,13 +212,11 @@ crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
 	struct otx2_ipsec_po_sa_ctl *ctl;
 	int cipher_key_len, auth_key_len;
 	struct otx2_ipsec_po_out_sa *sa;
-	struct otx2_sec_session *sess;
 	struct otx2_cpt_inst_s inst;
 	struct rte_ipv6_hdr *ip6;
 	struct rte_ipv4_hdr *ip;
 	int ret, ctx_len;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
 	lp = &sess->ipsec.lp;
 
@@ -398,7 +396,7 @@ static int
 crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
 				    struct rte_security_ipsec_xform *ipsec,
 				    struct rte_crypto_sym_xform *crypto_xform,
-				    struct rte_security_session *sec_sess)
+				    struct otx2_sec_session *sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	const uint8_t *cipher_key, *auth_key;
@@ -406,11 +404,9 @@ crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
 	struct otx2_ipsec_po_sa_ctl *ctl;
 	int cipher_key_len, auth_key_len;
 	struct otx2_ipsec_po_in_sa *sa;
-	struct otx2_sec_session *sess;
 	struct otx2_cpt_inst_s inst;
 	int ret;
 
-	sess = get_sec_session_private_data(sec_sess);
 	sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
 	lp = &sess->ipsec.lp;
 
@@ -512,7 +508,7 @@ static int
 crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
 				struct rte_security_ipsec_xform *ipsec,
 				struct rte_crypto_sym_xform *crypto_xform,
-				struct rte_security_session *sess)
+				struct otx2_sec_session *sess)
 {
 	int ret;
 
@@ -536,10 +532,9 @@ crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
 static int
 otx2_crypto_sec_session_create(void *device,
 			       struct rte_security_session_conf *conf,
-			       struct rte_security_session *sess,
-			       struct rte_mempool *mempool)
+			       void *sess)
 {
-	struct otx2_sec_session *priv;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
@@ -548,51 +543,25 @@ otx2_crypto_sec_session_create(void *device,
 	if (rte_security_dynfield_register() < 0)
 		return -rte_errno;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		otx2_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
 	priv->userdata = conf->userdata;
 
 	if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
 		ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
 						      conf->crypto_xform,
-						      sess);
+						      priv);
 	else
 		ret = -ENOTSUP;
 
-	if (ret)
-		goto mempool_put;
-
-	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
 	return ret;
 }
 
 static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
-				struct rte_security_session *sess)
+otx2_crypto_sec_session_destroy(void *device __rte_unused, void *sess)
 {
-	struct otx2_sec_session *priv;
-	struct rte_mempool *sess_mp;
+	struct otx2_sec_session *priv = sess;
 
-	priv = get_sec_session_private_data(sess);
-
-	if (priv == NULL)
-		return 0;
-
-	sess_mp = rte_mempool_from_obj(priv);
-
-	memset(priv, 0, sizeof(*priv));
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
+	if (priv)
+		memset(priv, 0, sizeof(*priv));
 
 	return 0;
 }
@@ -604,8 +573,7 @@ otx2_crypto_sec_session_get_size(void *device __rte_unused)
 }
 
 static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
-			      struct rte_security_session *session,
+otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused, void *session,
 			      struct rte_mbuf *m, void *params __rte_unused)
 {
 	/* Set security session as the pkt metadata */
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 93b257522b..fbb17e61ff 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -250,8 +250,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg,
 				op->sym->session, qat_sym_driver_id);
 #ifdef RTE_LIB_SECURITY
 	} else {
-		ctx = (struct qat_sym_session *)get_sec_session_private_data(
-				op->sym->sec_session);
+		ctx = (struct qat_sym_session *)(op->sym->sec_session);
 		if (likely(ctx)) {
 			if (unlikely(ctx->bpi_ctx == NULL)) {
 				QAT_DP_LOG(ERR, "QAT PMD only supports security"
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index e3ec7f0de4..8904aabd3d 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -202,9 +202,7 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops)
 		op = (struct rte_crypto_op *)ops[i];
 
 		if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
-			ctx = (struct qat_sym_session *)
-				get_sec_session_private_data(
-					op->sym->sec_session);
+			ctx = (struct qat_sym_session *)(op->sym->sec_session);
 
 			if (ctx == NULL || ctx->bpi_ctx == NULL)
 				continue;
@@ -243,9 +241,7 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie)
 		 * Assuming at this point that if it's a security
 		 * op, that this is for DOCSIS
 		 */
-		sess = (struct qat_sym_session *)
-				get_sec_session_private_data(
-				rx_op->sym->sec_session);
+		sess = (struct qat_sym_session *)(rx_op->sym->sec_session);
 		is_docsis_sec = 1;
 	} else
 #endif
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..2a22347c7f 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -2283,10 +2283,8 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 int
 qat_security_session_create(void *dev,
 				struct rte_security_session_conf *conf,
-				struct rte_security_session *sess,
-				struct rte_mempool *mempool)
+				void *sess_private_data)
 {
-	void *sess_private_data;
 	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;
 	int ret;
 
@@ -2296,40 +2294,25 @@ qat_security_session_create(void *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		QAT_LOG(ERR, "Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = qat_sec_session_set_docsis_parameters(cdev, conf,
 			sess_private_data);
 	if (ret != 0) {
 		QAT_LOG(ERR, "Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sec_session_private_data(sess, sess_private_data);
-
 	return ret;
 }
 
 int
-qat_security_session_destroy(void *dev __rte_unused,
-				 struct rte_security_session *sess)
+qat_security_session_destroy(void *dev __rte_unused, void *sess_priv)
 {
-	void *sess_priv = get_sec_session_private_data(sess);
 	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
 
 	if (sess_priv) {
 		if (s->bpi_ctx)
 			bpi_cipher_ctx_free(s->bpi_ctx);
 		memset(s, 0, qat_sym_session_get_private_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sec_session_private_data(sess, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 	return 0;
 }
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 6ebc176729..7fcc1d6f7b 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -166,9 +166,9 @@ qat_sym_validate_zuc_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
 #ifdef RTE_LIB_SECURITY
 int
 qat_security_session_create(void *dev, struct rte_security_session_conf *conf,
-		struct rte_security_session *sess, struct rte_mempool *mempool);
+		void *sess);
 int
-qat_security_session_destroy(void *dev, struct rte_security_session *sess);
+qat_security_session_destroy(void *dev, void *sess);
 #endif
 
 #endif /* _QAT_SYM_SESSION_H_ */
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6..7e3f05a067 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -369,24 +369,17 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 static int
 ixgbe_crypto_create_session(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *session,
-		struct rte_mempool *mempool)
+		void *session)
 {
 	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
-	struct ixgbe_crypto_session *ic_session = NULL;
+	struct ixgbe_crypto_session *ic_session = session;
 	struct rte_crypto_aead_xform *aead_xform;
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 
-	if (rte_mempool_get(mempool, (void **)&ic_session)) {
-		PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
-		return -ENOMEM;
-	}
-
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
 		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
-		rte_mempool_put(mempool, (void *)ic_session);
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -396,7 +389,6 @@ ixgbe_crypto_create_session(void *device,
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	} else {
@@ -404,7 +396,6 @@ ixgbe_crypto_create_session(void *device,
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	}
@@ -416,12 +407,9 @@ ixgbe_crypto_create_session(void *device,
 	ic_session->spi = conf->ipsec.spi;
 	ic_session->dev = eth_dev;
 
-	set_sec_session_private_data(session, ic_session);
-
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (ixgbe_crypto_add_sa(ic_session)) {
 			PMD_DRV_LOG(ERR, "Failed to add SA\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -EPERM;
 		}
 	}
@@ -436,14 +424,11 @@ ixgbe_crypto_session_get_size(__rte_unused void *device)
 }
 
 static int
-ixgbe_crypto_remove_session(void *device,
-		struct rte_security_session *session)
+ixgbe_crypto_remove_session(void *device, void *session)
 {
 	struct rte_eth_dev *eth_dev = device;
 	struct ixgbe_crypto_session *ic_session =
-		(struct ixgbe_crypto_session *)
-		get_sec_session_private_data(session);
-	struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+		(struct ixgbe_crypto_session *)session;
 
 	if (eth_dev != ic_session->dev) {
 		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
@@ -455,8 +440,6 @@ ixgbe_crypto_remove_session(void *device,
 		return -EFAULT;
 	}
 
-	rte_mempool_put(mempool, (void *)ic_session);
-
 	return 0;
 }
 
@@ -476,12 +459,11 @@ ixgbe_crypto_compute_pad_len(struct rte_mbuf *m)
 }
 
 static int
-ixgbe_crypto_update_mb(void *device __rte_unused,
-		struct rte_security_session *session,
+ixgbe_crypto_update_mb(void *device __rte_unused, void *session,
 		       struct rte_mbuf *m, void *params __rte_unused)
 {
-	struct ixgbe_crypto_session *ic_session =
-			get_sec_session_private_data(session);
+	struct ixgbe_crypto_session *ic_session = session;
+
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		union ixgbe_crypto_tx_desc_md *mdata =
 			(union ixgbe_crypto_tx_desc_md *)
@@ -685,8 +667,8 @@ ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
 				      const void *ip_spec,
 				      uint8_t is_ipv6)
 {
-	struct ixgbe_crypto_session *ic_session
-		= get_sec_session_private_data(sess);
+	struct ixgbe_crypto_session *ic_session =
+				(struct ixgbe_crypto_session *)sess;
 
 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_DECRYPTION) {
 		if (is_ipv6) {
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cb..ef851fe52c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -350,7 +350,7 @@ static int
 eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
 			      struct rte_security_ipsec_xform *ipsec,
 			      struct rte_crypto_sym_xform *crypto_xform,
-			      struct rte_security_session *sec_sess)
+			      struct otx2_sec_session *sec_sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_sec_session_ipsec_ip *sess;
@@ -363,7 +363,7 @@ eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
 	struct otx2_cpt_inst_s inst;
 	struct otx2_cpt_qp *qp;
 
-	priv = get_sec_session_private_data(sec_sess);
+	priv = sec_sess;
 	priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
 	sess = &priv->ipsec.ip;
 
@@ -468,7 +468,7 @@ static int
 eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
 			     struct rte_security_ipsec_xform *ipsec,
 			     struct rte_crypto_sym_xform *crypto_xform,
-			     struct rte_security_session *sec_sess)
+			     struct otx2_sec_session *sec_sess)
 {
 	struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
@@ -495,7 +495,7 @@ eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
 
 	ctl = &sa->ctl;
 
-	priv = get_sec_session_private_data(sec_sess);
+	priv = sec_sess;
 	priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
 	sess = &priv->ipsec.ip;
 
@@ -619,7 +619,7 @@ static int
 eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
 			  struct rte_security_ipsec_xform *ipsec,
 			  struct rte_crypto_sym_xform *crypto_xform,
-			  struct rte_security_session *sess)
+			  struct otx2_sec_session *sess)
 {
 	int ret;
 
@@ -638,22 +638,14 @@ eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
 static int
 otx2_eth_sec_session_create(void *device,
 			    struct rte_security_session_conf *conf,
-			    struct rte_security_session *sess,
-			    struct rte_mempool *mempool)
+			    void *sess)
 {
-	struct otx2_sec_session *priv;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 		return -ENOTSUP;
 
-	if (rte_mempool_get(mempool, (void **)&priv)) {
-		otx2_err("Could not allocate security session private data");
-		return -ENOMEM;
-	}
-
-	set_sec_session_private_data(sess, priv);
-
 	/*
 	 * Save userdata provided by the application. For ingress packets, this
 	 * could be used to identify the SA.
@@ -663,19 +655,14 @@ otx2_eth_sec_session_create(void *device,
 	if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
 		ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
 						conf->crypto_xform,
-						sess);
+						priv);
 	else
 		ret = -ENOTSUP;
 
 	if (ret)
-		goto mempool_put;
+		return ret;
 
 	return 0;
-
-mempool_put:
-	rte_mempool_put(mempool, priv);
-	set_sec_session_private_data(sess, NULL);
-	return ret;
 }
 
 static void
@@ -688,20 +675,14 @@ otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
 }
 
 static int
-otx2_eth_sec_session_destroy(void *device,
-			     struct rte_security_session *sess)
+otx2_eth_sec_session_destroy(void *device, void *sess)
 {
 	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
 	struct otx2_sec_session_ipsec_ip *sess_ip;
 	struct otx2_ipsec_fp_in_sa *sa;
-	struct otx2_sec_session *priv;
-	struct rte_mempool *sess_mp;
+	struct otx2_sec_session *priv = sess;
 	int ret;
 
-	priv = get_sec_session_private_data(sess);
-	if (priv == NULL)
-		return -EINVAL;
-
 	sess_ip = &priv->ipsec.ip;
 
 	if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
@@ -727,11 +708,6 @@ otx2_eth_sec_session_destroy(void *device,
 			return ret;
 	}
 
-	sess_mp = rte_mempool_from_obj(priv);
-
-	set_sec_session_private_data(sess, NULL);
-	rte_mempool_put(sess_mp, priv);
-
 	return 0;
 }
 
@@ -742,9 +718,8 @@ otx2_eth_sec_session_get_size(void *device __rte_unused)
 }
 
 static int
-otx2_eth_sec_set_pkt_mdata(void *device __rte_unused,
-			    struct rte_security_session *session,
-			    struct rte_mbuf *m, void *params __rte_unused)
+otx2_eth_sec_set_pkt_mdata(void *device __rte_unused, void *session,
+		struct rte_mbuf *m, void *params __rte_unused)
 {
 	/* Set security session as the pkt metadata */
 	*rte_security_dynfield(m) = (rte_security_dynfield_t)session;
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
index 623a2a841e..9ecb786947 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
@@ -54,7 +54,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
 		struct nix_iova_s nix_iova;
 	} *sd;
 
-	priv = get_sec_session_private_data((void *)(*rte_security_dynfield(m)));
+	priv = (void *)(*rte_security_dynfield(m));
 	sess = &priv->ipsec.ip;
 	sa = &sess->out_sa;
 
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973b..cc6370c2f3 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -349,24 +349,17 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 static int
 txgbe_crypto_create_session(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *session,
-		struct rte_mempool *mempool)
+		void *session)
 {
 	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
-	struct txgbe_crypto_session *ic_session = NULL;
+	struct txgbe_crypto_session *ic_session = session;
 	struct rte_crypto_aead_xform *aead_xform;
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 
-	if (rte_mempool_get(mempool, (void **)&ic_session)) {
-		PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
-		return -ENOMEM;
-	}
-
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
 		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
-		rte_mempool_put(mempool, (void *)ic_session);
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -376,7 +369,6 @@ txgbe_crypto_create_session(void *device,
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	} else {
@@ -384,7 +376,6 @@ txgbe_crypto_create_session(void *device,
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -ENOTSUP;
 		}
 	}
@@ -396,12 +387,9 @@ txgbe_crypto_create_session(void *device,
 	ic_session->spi = conf->ipsec.spi;
 	ic_session->dev = eth_dev;
 
-	set_sec_session_private_data(session, ic_session);
-
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (txgbe_crypto_add_sa(ic_session)) {
 			PMD_DRV_LOG(ERR, "Failed to add SA\n");
-			rte_mempool_put(mempool, (void *)ic_session);
 			return -EPERM;
 		}
 	}
@@ -416,14 +404,11 @@ txgbe_crypto_session_get_size(__rte_unused void *device)
 }
 
 static int
-txgbe_crypto_remove_session(void *device,
-		struct rte_security_session *session)
+txgbe_crypto_remove_session(void *device, void *session)
 {
 	struct rte_eth_dev *eth_dev = device;
 	struct txgbe_crypto_session *ic_session =
-		(struct txgbe_crypto_session *)
-		get_sec_session_private_data(session);
-	struct rte_mempool *mempool = rte_mempool_from_obj(ic_session);
+		(struct txgbe_crypto_session *)session;
 
 	if (eth_dev != ic_session->dev) {
 		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
@@ -435,8 +420,6 @@ txgbe_crypto_remove_session(void *device,
 		return -EFAULT;
 	}
 
-	rte_mempool_put(mempool, (void *)ic_session);
-
 	return 0;
 }
 
@@ -456,12 +439,11 @@ txgbe_crypto_compute_pad_len(struct rte_mbuf *m)
 }
 
 static int
-txgbe_crypto_update_mb(void *device __rte_unused,
-		struct rte_security_session *session,
-		       struct rte_mbuf *m, void *params __rte_unused)
+txgbe_crypto_update_mb(void *device __rte_unused, void *session,
+		struct rte_mbuf *m, void *params __rte_unused)
 {
-	struct txgbe_crypto_session *ic_session =
-			get_sec_session_private_data(session);
+	struct txgbe_crypto_session *ic_session = session;
+
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		union txgbe_crypto_tx_desc_md *mdata =
 			(union txgbe_crypto_tx_desc_md *)
@@ -662,7 +644,7 @@ txgbe_crypto_add_ingress_sa_from_flow(const void *sess,
 				      uint8_t is_ipv6)
 {
 	struct txgbe_crypto_session *ic_session =
-			get_sec_session_private_data(sess);
+				(struct txgbe_crypto_session *)sess;
 
 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_DECRYPTION) {
 		if (is_ipv6) {
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 6817139663..03d907cba8 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -117,8 +117,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 			set_ipsec_conf(sa, &(sess_conf.ipsec));
 
 			ips->security.ses = rte_security_session_create(ctx,
-					&sess_conf, ipsec_ctx->session_pool,
-					ipsec_ctx->session_priv_pool);
+					&sess_conf, ipsec_ctx->session_pool);
 			if (ips->security.ses == NULL) {
 				RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -199,8 +198,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		}
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-				&sess_conf, skt_ctx->session_pool,
-				skt_ctx->session_priv_pool);
+				&sess_conf, skt_ctx->session_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
@@ -380,8 +378,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 		sess_conf.userdata = (void *) sa;
 
 		ips->security.ses = rte_security_session_create(sec_ctx,
-					&sess_conf, skt_ctx->session_pool,
-					skt_ctx->session_priv_pool);
+					&sess_conf, skt_ctx->session_pool);
 		if (ips->security.ses == NULL) {
 			RTE_LOG(ERR, IPSEC,
 				"SEC Session init failed: err: %d\n", ret);
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index fe81ed3e4c..06560b9cba 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -39,35 +39,37 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
-struct rte_security_session *
+void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp,
-			    struct rte_mempool *priv_mp)
+			    struct rte_mempool *mp)
 {
 	struct rte_security_session *sess = NULL;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_create, NULL, NULL);
 	RTE_PTR_OR_ERR_RET(conf, NULL);
 	RTE_PTR_OR_ERR_RET(mp, NULL);
-	RTE_PTR_OR_ERR_RET(priv_mp, NULL);
+
+	if (mp->elt_size < sizeof(struct rte_security_session) +
+			instance->ops->session_get_size(instance->device))
+		return NULL;
 
 	if (rte_mempool_get(mp, (void **)&sess))
 		return NULL;
 
 	if (instance->ops->session_create(instance->device, conf,
-				sess, priv_mp)) {
+				sess->sess_private_data)) {
 		rte_mempool_put(mp, (void *)sess);
 		return NULL;
 	}
 	instance->sess_cnt++;
 
-	return sess;
+	return sess->sess_private_data;
 }
 
 int
 rte_security_session_update(struct rte_security_ctx *instance,
-			    struct rte_security_session *sess,
+			    void *sess,
 			    struct rte_security_session_conf *conf)
 {
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_update, -EINVAL,
@@ -88,8 +90,7 @@ rte_security_session_get_size(struct rte_security_ctx *instance)
 
 int
 rte_security_session_stats_get(struct rte_security_ctx *instance,
-			       struct rte_security_session *sess,
-			       struct rte_security_stats *stats)
+			       void *sess, struct rte_security_stats *stats)
 {
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_stats_get, -EINVAL,
 			-ENOTSUP);
@@ -100,9 +101,9 @@ rte_security_session_stats_get(struct rte_security_ctx *instance,
 }
 
 int
-rte_security_session_destroy(struct rte_security_ctx *instance,
-			     struct rte_security_session *sess)
+rte_security_session_destroy(struct rte_security_ctx *instance, void *sess)
 {
+	struct rte_security_session *s;
 	int ret;
 
 	RTE_PTR_CHAIN3_OR_ERR_RET(instance, ops, session_destroy, -EINVAL,
@@ -113,7 +114,8 @@ rte_security_session_destroy(struct rte_security_ctx *instance,
 	if (ret != 0)
 		return ret;
 
-	rte_mempool_put(rte_mempool_from_obj(sess), (void *)sess);
+	s = container_of(sess, struct rte_security_session, sess_private_data);
+	rte_mempool_put(rte_mempool_from_obj(s), (void *)s);
 
 	if (instance->sess_cnt)
 		instance->sess_cnt--;
@@ -123,7 +125,7 @@ rte_security_session_destroy(struct rte_security_ctx *instance,
 
 int
 __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-				struct rte_security_session *sess,
+				void *sess,
 				struct rte_mbuf *m, void *params)
 {
 #ifdef RTE_DEBUG
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index ab1a6e1f65..bef42c0686 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -457,10 +457,12 @@ struct rte_security_session_conf {
 };
 
 struct rte_security_session {
-	void *sess_private_data;
-	/**< Private session material */
 	uint64_t opaque_data;
 	/**< Opaque user defined data */
+	uint64_t fast_mdata;
+	/**< Fast metadata to be used for inline path */
+	__extension__ void *sess_private_data[0];
+	/**< Private session material */
 };
 
 /**
@@ -474,11 +476,10 @@ struct rte_security_session {
  *  - On success, pointer to session
  *  - On failure, NULL
  */
-struct rte_security_session *
+void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
-			    struct rte_mempool *mp,
-			    struct rte_mempool *priv_mp);
+			    struct rte_mempool *mp);
 
 /**
  * Update security session as specified by the session configuration
@@ -493,7 +494,7 @@ rte_security_session_create(struct rte_security_ctx *instance,
 __rte_experimental
 int
 rte_security_session_update(struct rte_security_ctx *instance,
-			    struct rte_security_session *sess,
+			    void *sess,
 			    struct rte_security_session_conf *conf);
 
 /**
@@ -524,7 +525,7 @@ rte_security_session_get_size(struct rte_security_ctx *instance);
  */
 int
 rte_security_session_destroy(struct rte_security_ctx *instance,
-			     struct rte_security_session *sess);
+			     void *sess);
 
 /** Device-specific metadata field type */
 typedef uint64_t rte_security_dynfield_t;
@@ -570,7 +571,7 @@ static inline bool rte_security_dynfield_is_registered(void)
 /** Function to call PMD specific function pointer set_pkt_metadata() */
 __rte_experimental
 extern int __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-					   struct rte_security_session *sess,
+					   void *sess,
 					   struct rte_mbuf *m, void *params);
 
 /**
@@ -588,13 +589,13 @@ extern int __rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
  */
 static inline int
 rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
-			      struct rte_security_session *sess,
+			      void *sess,
 			      struct rte_mbuf *mb, void *params)
 {
 	/* Fast Path */
 	if (instance->flags & RTE_SEC_CTX_F_FAST_SET_MDATA) {
 		*rte_security_dynfield(mb) =
-			(rte_security_dynfield_t)(sess->sess_private_data);
+			(rte_security_dynfield_t)(sess);
 		return 0;
 	}
 
@@ -644,26 +645,13 @@ rte_security_get_userdata(struct rte_security_ctx *instance, uint64_t md)
  */
 static inline int
 __rte_security_attach_session(struct rte_crypto_sym_op *sym_op,
-			      struct rte_security_session *sess)
+			      void *sess)
 {
 	sym_op->sec_session = sess;
 
 	return 0;
 }
 
-static inline void *
-get_sec_session_private_data(const struct rte_security_session *sess)
-{
-	return sess->sess_private_data;
-}
-
-static inline void
-set_sec_session_private_data(struct rte_security_session *sess,
-			     void *private_data)
-{
-	sess->sess_private_data = private_data;
-}
-
 /**
  * Attach a session to a crypto operation.
  * This API is needed only in case of RTE_SECURITY_SESS_CRYPTO_PROTO_OFFLOAD
@@ -674,8 +662,7 @@ set_sec_session_private_data(struct rte_security_session *sess,
  * @param	sess	security session
  */
 static inline int
-rte_security_attach_session(struct rte_crypto_op *op,
-			    struct rte_security_session *sess)
+rte_security_attach_session(struct rte_crypto_op *op, void *sess)
 {
 	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
 		return -EINVAL;
@@ -737,7 +724,7 @@ struct rte_security_stats {
 __rte_experimental
 int
 rte_security_session_stats_get(struct rte_security_ctx *instance,
-			       struct rte_security_session *sess,
+			       void *sess,
 			       struct rte_security_stats *stats);
 
 /**
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 938373205c..9afefc8c4e 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -35,8 +35,7 @@ extern "C" {
  */
 typedef int (*security_session_create_t)(void *device,
 		struct rte_security_session_conf *conf,
-		struct rte_security_session *sess,
-		struct rte_mempool *mp);
+		void *sess);
 
 /**
  * Free driver private session data.
@@ -44,8 +43,7 @@ typedef int (*security_session_create_t)(void *device,
  * @param	device		Crypto/eth device pointer
  * @param	sess		Security session structure
  */
-typedef int (*security_session_destroy_t)(void *device,
-		struct rte_security_session *sess);
+typedef int (*security_session_destroy_t)(void *device, void *sess);
 
 /**
  * Update driver private session data.
@@ -60,8 +58,7 @@ typedef int (*security_session_destroy_t)(void *device,
  *  - Returns -ENOTSUP if crypto device does not support the crypto transform.
  */
 typedef int (*security_session_update_t)(void *device,
-		struct rte_security_session *sess,
-		struct rte_security_session_conf *conf);
+		void *sess, struct rte_security_session_conf *conf);
 
 /**
  * Get the size of a security session
@@ -86,8 +83,7 @@ typedef unsigned int (*security_session_get_size)(void *device);
  *  - Returns -EINVAL if session parameters are invalid.
  */
 typedef int (*security_session_stats_get_t)(void *device,
-		struct rte_security_session *sess,
-		struct rte_security_stats *stats);
+		void *sess, struct rte_security_stats *stats);
 
 __rte_experimental
 int rte_security_dynfield_register(void);
@@ -96,7 +92,7 @@ int rte_security_dynfield_register(void);
  * Update the mbuf with provided metadata.
  *
  * @param	device		Crypto/eth device pointer
- * @param	sess		Security session structure
+ * @param	sess		Security session
  * @param	mb		Packet buffer
  * @param	params		Metadata
  *
@@ -105,7 +101,7 @@ int rte_security_dynfield_register(void);
  *  - Returns -ve value for errors.
  */
 typedef int (*security_set_pkt_metadata_t)(void *device,
-		struct rte_security_session *sess, struct rte_mbuf *mb,
+		void *sess, struct rte_mbuf *mb,
 		void *params);
 
 /**
-- 
2.25.1



* [dpdk-dev] [PATCH 3/3] cryptodev: rework session framework
    2021-09-30 14:50  1% ` [dpdk-dev] [PATCH 1/3] security: rework session framework Akhil Goyal
@ 2021-09-30 14:50  1% ` Akhil Goyal
  2021-10-01 15:53  0%   ` Zhang, Roy Fan
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-09-30 14:50 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

In the current design, rte_cryptodev_sym_session_create() and
rte_cryptodev_sym_session_init() use separate mempool objects
for a single session. And although structure
rte_cryptodev_sym_session is not directly used by the application,
it is exposed in public headers, so modifying it in future may
cause ABI breakage.

To address these two issues, rte_cryptodev_sym_session_create
will take one mempool object for both the session and the session
private data. The API rte_cryptodev_sym_session_init will no
longer take a mempool object.
rte_cryptodev_sym_session_create will now return an opaque session
pointer which will be used by the app in rte_cryptodev_sym_session_init
and other APIs.

With this change, rte_cryptodev_sym_session_init will pass the
pointer to the session private data of the corresponding driver to
the PMD, based on the driver_id, for the PMD to fill in its data.

In the data path, the opaque session pointer is attached to
rte_crypto_op, and the PMD can call an internal library API to
retrieve the session private data pointer based on the driver id.

TODO:
- inline APIs for opaque data
- move rte_cryptodev_sym_session struct to cryptodev_pmd.h
- currently nb_drivers is updated in RTE_INIT, which increases
the memory requirement for each session. This will be moved to
PMD probe so that memory is allocated only for those PMDs that
are actually probed, not just compiled in.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 app/test-crypto-perf/cperf.h                  |   1 -
 app/test-crypto-perf/cperf_ops.c              |  33 ++---
 app/test-crypto-perf/cperf_ops.h              |   6 +-
 app/test-crypto-perf/cperf_test_latency.c     |   5 +-
 app/test-crypto-perf/cperf_test_latency.h     |   1 -
 .../cperf_test_pmd_cyclecount.c               |   5 +-
 .../cperf_test_pmd_cyclecount.h               |   1 -
 app/test-crypto-perf/cperf_test_throughput.c  |   5 +-
 app/test-crypto-perf/cperf_test_throughput.h  |   1 -
 app/test-crypto-perf/cperf_test_verify.c      |   5 +-
 app/test-crypto-perf/cperf_test_verify.h      |   1 -
 app/test-crypto-perf/main.c                   |  29 +---
 app/test/test_cryptodev.c                     | 130 +++++-------------
 app/test/test_cryptodev.h                     |   1 -
 app/test/test_cryptodev_asym.c                |   1 -
 app/test/test_cryptodev_blockcipher.c         |   6 +-
 app/test/test_event_crypto_adapter.c          |  28 +---
 app/test/test_ipsec.c                         |  22 +--
 drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |  33 +----
 .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c    |  34 +----
 drivers/crypto/armv8/rte_armv8_pmd_ops.c      |  34 +----
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |  36 +----
 drivers/crypto/bcmfs/bcmfs_sym_session.h      |   6 +-
 drivers/crypto/caam_jr/caam_jr.c              |  32 ++---
 drivers/crypto/ccp/ccp_pmd_ops.c              |  32 +----
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  18 ++-
 drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |  18 +--
 drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |  61 +++-----
 drivers/crypto/cnxk/cnxk_cryptodev_ops.h      |  13 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +---
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  31 +----
 drivers/crypto/kasumi/rte_kasumi_pmd_ops.c    |  34 +----
 drivers/crypto/mlx5/mlx5_crypto.c             |  24 +---
 drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  36 ++---
 drivers/crypto/nitrox/nitrox_sym.c            |  31 +----
 drivers/crypto/null/null_crypto_pmd_ops.c     |  34 +----
 .../crypto/octeontx/otx_cryptodev_hw_access.h |   1 -
 drivers/crypto/octeontx/otx_cryptodev_ops.c   |  60 +++-----
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  52 +++----
 .../octeontx2/otx2_cryptodev_ops_helper.h     |  16 +--
 drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  35 +----
 drivers/crypto/qat/qat_sym_session.c          |  29 +---
 drivers/crypto/qat/qat_sym_session.h          |   6 +-
 drivers/crypto/scheduler/scheduler_pmd_ops.c  |   9 +-
 drivers/crypto/snow3g/rte_snow3g_pmd_ops.c    |  34 +----
 drivers/crypto/virtio/virtio_cryptodev.c      |  31 ++---
 drivers/crypto/zuc/rte_zuc_pmd_ops.c          |  35 +----
 .../octeontx2/otx2_evdev_crypto_adptr_rx.h    |   3 +-
 examples/fips_validation/fips_dev_self_test.c |  32 ++---
 examples/fips_validation/main.c               |  20 +--
 examples/ipsec-secgw/ipsec-secgw.c            |  72 +++++-----
 examples/ipsec-secgw/ipsec.c                  |   3 +-
 examples/ipsec-secgw/ipsec.h                  |   1 -
 examples/ipsec-secgw/ipsec_worker.c           |   4 -
 examples/l2fwd-crypto/main.c                  |  41 +-----
 examples/vhost_crypto/main.c                  |  16 +--
 lib/cryptodev/cryptodev_pmd.h                 |   7 +-
 lib/cryptodev/rte_crypto.h                    |   2 +-
 lib/cryptodev/rte_crypto_sym.h                |   2 +-
 lib/cryptodev/rte_cryptodev.c                 |  73 ++++++----
 lib/cryptodev/rte_cryptodev.h                 |  23 ++--
 lib/cryptodev/rte_cryptodev_trace.h           |   5 +-
 lib/pipeline/rte_table_action.c               |   8 +-
 lib/pipeline/rte_table_action.h               |   2 +-
 lib/vhost/rte_vhost_crypto.h                  |   3 -
 lib/vhost/vhost_crypto.c                      |   7 +-
 66 files changed, 399 insertions(+), 1050 deletions(-)

diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
index 2b0aad095c..db58228dce 100644
--- a/app/test-crypto-perf/cperf.h
+++ b/app/test-crypto-perf/cperf.h
@@ -15,7 +15,6 @@ struct cperf_op_fns;
 
 typedef void  *(*cperf_constructor_t)(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 1b3cbe77b9..f094bc656d 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -12,7 +12,7 @@ static int
 cperf_set_ops_asym(struct rte_crypto_op **ops,
 		   uint32_t src_buf_offset __rte_unused,
 		   uint32_t dst_buf_offset __rte_unused, uint16_t nb_ops,
-		   struct rte_cryptodev_sym_session *sess,
+		   void *sess,
 		   const struct cperf_options *options __rte_unused,
 		   const struct cperf_test_vector *test_vector __rte_unused,
 		   uint16_t iv_offset __rte_unused,
@@ -40,7 +40,7 @@ static int
 cperf_set_ops_security(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset __rte_unused,
 		uint32_t dst_buf_offset __rte_unused,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options __rte_unused,
 		const struct cperf_test_vector *test_vector __rte_unused,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
@@ -106,7 +106,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector __rte_unused,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
@@ -145,7 +145,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_null_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector __rte_unused,
 		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
@@ -184,7 +184,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_cipher(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx)
@@ -240,7 +240,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx)
@@ -340,7 +340,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx)
@@ -455,7 +455,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 static int
 cperf_set_ops_aead(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx)
@@ -563,9 +563,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
 	return 0;
 }
 
-static struct rte_cryptodev_sym_session *
+static void *
 cperf_create_session(struct rte_mempool *sess_mp,
-	struct rte_mempool *priv_mp,
 	uint8_t dev_id,
 	const struct cperf_options *options,
 	const struct cperf_test_vector *test_vector,
@@ -590,7 +589,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 		if (sess == NULL)
 			return NULL;
 		rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
-						     &xform, priv_mp);
+						     &xform, sess_mp);
 		if (rc < 0) {
 			if (sess != NULL) {
 				rte_cryptodev_asym_session_clear(dev_id,
@@ -742,8 +741,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			cipher_xform.cipher.iv.length = 0;
 		}
 		/* create crypto session */
-		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform,
-				priv_mp);
+		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform);
 	/*
 	 *  auth only
 	 */
@@ -770,8 +768,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			auth_xform.auth.iv.length = 0;
 		}
 		/* create crypto session */
-		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
-				priv_mp);
+		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform);
 	/*
 	 * cipher and auth
 	 */
@@ -830,12 +827,12 @@ cperf_create_session(struct rte_mempool *sess_mp,
 			cipher_xform.next = &auth_xform;
 			/* create crypto session */
 			rte_cryptodev_sym_session_init(dev_id,
-					sess, &cipher_xform, priv_mp);
+					sess, &cipher_xform);
 		} else { /* auth then cipher */
 			auth_xform.next = &cipher_xform;
 			/* create crypto session */
 			rte_cryptodev_sym_session_init(dev_id,
-					sess, &auth_xform, priv_mp);
+					sess, &auth_xform);
 		}
 	} else { /* options->op_type == CPERF_AEAD */
 		aead_xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
@@ -856,7 +853,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
 
 		/* Create crypto session */
 		rte_cryptodev_sym_session_init(dev_id,
-					sess, &aead_xform, priv_mp);
+					sess, &aead_xform);
 	}
 
 	return sess;
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index ff125d12cd..3ff10491a0 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -12,15 +12,15 @@
 #include "cperf_test_vectors.h"
 
 
-typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
-		struct rte_mempool *sess_mp, struct rte_mempool *sess_priv_mp,
+typedef void *(*cperf_sessions_create_t)(
+		struct rte_mempool *sess_mp,
 		uint8_t dev_id, const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset);
 
 typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
 		uint32_t src_buf_offset, uint32_t dst_buf_offset,
-		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
+		uint16_t nb_ops, void *sess,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
 		uint16_t iv_offset, uint32_t *imix_idx);
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 159fe8492b..4193f7e777 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -24,7 +24,7 @@ struct cperf_latency_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -59,7 +59,6 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
 
 void *
 cperf_latency_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -84,7 +83,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
 		sizeof(struct rte_crypto_sym_op) +
 		sizeof(struct cperf_op_result *);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-perf/cperf_test_latency.h
index ed5b0a07bb..d3fc3218d7 100644
--- a/app/test-crypto-perf/cperf_test_latency.h
+++ b/app/test-crypto-perf/cperf_test_latency.h
@@ -17,7 +17,6 @@
 void *
 cperf_latency_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index cbbbedd9ba..3dd489376f 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -27,7 +27,7 @@ struct cperf_pmd_cyclecount_ctx {
 	struct rte_crypto_op **ops;
 	struct rte_crypto_op **ops_processed;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -93,7 +93,6 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
 
 void *
 cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -120,7 +119,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 			sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
index 3084038a18..beb4419910 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
@@ -18,7 +18,6 @@
 void *
 cperf_pmd_cyclecount_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 76fcda47ff..dc5c48b4da 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -18,7 +18,7 @@ struct cperf_throughput_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -64,7 +64,6 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
 
 void *
 cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -87,7 +86,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 		sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-crypto-perf/cperf_test_throughput.h
index 91e1a4b708..439ec8e559 100644
--- a/app/test-crypto-perf/cperf_test_throughput.h
+++ b/app/test-crypto-perf/cperf_test_throughput.h
@@ -18,7 +18,6 @@
 void *
 cperf_throughput_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 2939aeaa93..cf561dd700 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -18,7 +18,7 @@ struct cperf_verify_ctx {
 
 	struct rte_mempool *pool;
 
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 
 	cperf_populate_ops_t populate_ops;
 
@@ -51,7 +51,6 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
 
 void *
 cperf_verify_test_constructor(struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id, uint16_t qp_id,
 		const struct cperf_options *options,
 		const struct cperf_test_vector *test_vector,
@@ -74,7 +73,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
 	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
 		sizeof(struct rte_crypto_sym_op);
 
-	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
+	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
 			test_vector, iv_offset);
 	if (ctx->sess == NULL)
 		goto err;
diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-perf/cperf_test_verify.h
index ac2192ba99..9f70ad87ba 100644
--- a/app/test-crypto-perf/cperf_test_verify.h
+++ b/app/test-crypto-perf/cperf_test_verify.h
@@ -18,7 +18,6 @@
 void *
 cperf_verify_test_constructor(
 		struct rte_mempool *sess_mp,
-		struct rte_mempool *sess_priv_mp,
 		uint8_t dev_id,
 		uint16_t qp_id,
 		const struct cperf_options *options,
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 390380898e..c3327b7e55 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -118,35 +118,14 @@ fill_session_pool_socket(int32_t socket_id, uint32_t session_priv_size,
 	char mp_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *sess_mp;
 
-	if (session_pool_socket[socket_id].priv_mp == NULL) {
-		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-			"priv_sess_mp_%u", socket_id);
-
-		sess_mp = rte_mempool_create(mp_name,
-					nb_sessions,
-					session_priv_size,
-					0, 0, NULL, NULL, NULL,
-					NULL, socket_id,
-					0);
-
-		if (sess_mp == NULL) {
-			printf("Cannot create pool \"%s\" on socket %d\n",
-				mp_name, socket_id);
-			return -ENOMEM;
-		}
-
-		printf("Allocated pool \"%s\" on socket %d\n",
-			mp_name, socket_id);
-		session_pool_socket[socket_id].priv_mp = sess_mp;
-	}
-
 	if (session_pool_socket[socket_id].sess_mp == NULL) {
 
 		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
 			"sess_mp_%u", socket_id);
 
 		sess_mp = rte_cryptodev_sym_session_pool_create(mp_name,
-					nb_sessions, 0, 0, 0, socket_id);
+					nb_sessions, session_priv_size,
+					0, 0, socket_id);
 
 		if (sess_mp == NULL) {
 			printf("Cannot create pool \"%s\" on socket %d\n",
@@ -344,12 +323,9 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
 			return ret;
 
 		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
-		qp_conf.mp_session_private =
-				session_pool_socket[socket_id].priv_mp;
 
 		if (opts->op_type == CPERF_ASYM_MODEX) {
 			qp_conf.mp_session = NULL;
-			qp_conf.mp_session_private = NULL;
 		}
 
 		ret = rte_cryptodev_configure(cdev_id, &conf);
@@ -704,7 +680,6 @@ main(int argc, char **argv)
 
 		ctx[i] = cperf_testmap[opts.test].constructor(
 				session_pool_socket[socket_id].sess_mp,
-				session_pool_socket[socket_id].priv_mp,
 				cdev_id, qp_id,
 				&opts, t_vec, &op_fns);
 		if (ctx[i] == NULL) {
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 82f819211a..e5c7930c63 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -79,7 +79,7 @@ struct crypto_unittest_params {
 #endif
 
 	union {
-		struct rte_cryptodev_sym_session *sess;
+		void *sess;
 #ifdef RTE_LIB_SECURITY
 		void *sec_session;
 #endif
@@ -119,7 +119,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 		uint8_t *hmac_key);
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_param,
 		const uint8_t *cipher,
@@ -596,23 +596,11 @@ testsuite_setup(void)
 	}
 
 	ts_params->session_mpool = rte_cryptodev_sym_session_pool_create(
-			"test_sess_mp", MAX_NB_SESSIONS, 0, 0, 0,
+			"test_sess_mp", MAX_NB_SESSIONS, session_size, 0, 0,
 			SOCKET_ID_ANY);
 	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 			"session mempool allocation failed");
 
-	ts_params->session_priv_mpool = rte_mempool_create(
-			"test_sess_mp_priv",
-			MAX_NB_SESSIONS,
-			session_size,
-			0, 0, NULL, NULL, NULL,
-			NULL, SOCKET_ID_ANY,
-			0);
-	TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
-			"session mempool allocation failed");
-
-
-
 	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
 			&ts_params->conf),
 			"Failed to configure cryptodev %u with %u qps",
@@ -620,7 +608,6 @@ testsuite_setup(void)
 
 	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
@@ -650,11 +637,6 @@ testsuite_teardown(void)
 	}
 
 	/* Free session mempools */
-	if (ts_params->session_priv_mpool != NULL) {
-		rte_mempool_free(ts_params->session_priv_mpool);
-		ts_params->session_priv_mpool = NULL;
-	}
-
 	if (ts_params->session_mpool != NULL) {
 		rte_mempool_free(ts_params->session_mpool);
 		ts_params->session_mpool = NULL;
@@ -1330,7 +1312,6 @@ dev_configure_and_start(uint64_t ff_disable)
 	ts_params->conf.ff_disable = ff_disable;
 	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
 			&ts_params->conf),
@@ -1552,7 +1533,6 @@ test_queue_pair_descriptor_setup(void)
 	 */
 	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
@@ -2146,8 +2126,7 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
 
 	/* Create crypto session*/
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	/* Generate crypto op data structure */
@@ -2247,7 +2226,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 		uint8_t *hmac_key);
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_params,
 		const uint8_t *cipher,
@@ -2288,7 +2267,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
 
 
 static int
-test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
+test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
 		struct crypto_unittest_params *ut_params,
 		struct crypto_testsuite_params *ts_params,
 		const uint8_t *cipher,
@@ -2401,8 +2380,7 @@ create_wireless_algo_hash_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->auth_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2443,8 +2421,7 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2566,8 +2543,7 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2629,8 +2605,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->cipher_xform);
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
 
@@ -2699,13 +2674,11 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
 		ut_params->auth_xform.next = NULL;
 		ut_params->cipher_xform.next = &ut_params->auth_xform;
 		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->cipher_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->cipher_xform);
 
 	} else
 		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	if (status == -ENOTSUP)
 		return TEST_SKIPPED;
@@ -7838,8 +7811,7 @@ create_aead_session(uint8_t dev_id, enum rte_crypto_aead_algorithm algo,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->aead_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->aead_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -10992,8 +10964,7 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->auth_xform);
 
 	if (ut_params->sess == NULL)
 		return TEST_FAILED;
@@ -11206,7 +11177,7 @@ test_multi_session(void)
 	struct crypto_unittest_params *ut_params = &unittest_params;
 
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session **sessions;
+	void **sessions;
 
 	uint16_t i;
 
@@ -11229,9 +11200,7 @@ test_multi_session(void)
 
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
-	sessions = rte_malloc(NULL,
-			sizeof(struct rte_cryptodev_sym_session *) *
-			(MAX_NB_SESSIONS + 1), 0);
+	sessions = rte_malloc(NULL, sizeof(void *) * (MAX_NB_SESSIONS + 1), 0);
 
 	/* Create multiple crypto sessions*/
 	for (i = 0; i < MAX_NB_SESSIONS; i++) {
@@ -11240,8 +11209,7 @@ test_multi_session(void)
 				ts_params->session_mpool);
 
 		rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-				sessions[i], &ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				sessions[i], &ut_params->auth_xform);
 		TEST_ASSERT_NOT_NULL(sessions[i],
 				"Session creation failed at session number %u",
 				i);
@@ -11279,8 +11247,7 @@ test_multi_session(void)
 	sessions[i] = NULL;
 	/* Next session create should fail */
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			sessions[i], &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			sessions[i], &ut_params->auth_xform);
 	TEST_ASSERT_NULL(sessions[i],
 			"Session creation succeeded unexpectedly!");
 
@@ -11311,7 +11278,7 @@ test_multi_session_random_usage(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session **sessions;
+	void **sessions;
 	uint32_t i, j;
 	struct multi_session_params ut_paramz[] = {
 
@@ -11355,8 +11322,7 @@ test_multi_session_random_usage(void)
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	sessions = rte_malloc(NULL,
-			(sizeof(struct rte_cryptodev_sym_session *)
-					* MAX_NB_SESSIONS) + 1, 0);
+			(sizeof(void *) * MAX_NB_SESSIONS) + 1, 0);
 
 	for (i = 0; i < MB_SESSION_NUMBER; i++) {
 		sessions[i] = rte_cryptodev_sym_session_create(
@@ -11373,8 +11339,7 @@ test_multi_session_random_usage(void)
 		rte_cryptodev_sym_session_init(
 				ts_params->valid_devs[0],
 				sessions[i],
-				&ut_paramz[i].ut_params.auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_paramz[i].ut_params.auth_xform);
 
 		TEST_ASSERT_NOT_NULL(sessions[i],
 				"Session creation failed at session number %u",
@@ -11457,8 +11422,7 @@ test_null_invalid_operation(void)
 
 	/* Create Crypto session*/
 	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT(ret < 0,
 			"Session creation succeeded unexpectedly");
 
@@ -11475,8 +11439,7 @@ test_null_invalid_operation(void)
 
 	/* Create Crypto session*/
 	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->auth_xform);
 	TEST_ASSERT(ret < 0,
 			"Session creation succeeded unexpectedly");
 
@@ -11521,8 +11484,7 @@ test_null_burst_operation(void)
 
 	/* Create Crypto session*/
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
-			ut_params->sess, &ut_params->cipher_xform,
-			ts_params->session_priv_mpool);
+			ut_params->sess, &ut_params->cipher_xform);
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
 	TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params->op_mpool,
@@ -11634,7 +11596,6 @@ test_enq_callback_setup(void)
 
 	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			ts_params->valid_devs[0], qp_id, &qp_conf,
@@ -11734,7 +11695,6 @@ test_deq_callback_setup(void)
 
 	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = ts_params->session_mpool;
-	qp_conf.mp_session_private = ts_params->session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			ts_params->valid_devs[0], qp_id, &qp_conf,
@@ -11943,8 +11903,7 @@ static int create_gmac_session(uint8_t dev_id,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-			&ut_params->auth_xform,
-			ts_params->session_priv_mpool);
+			&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -12588,8 +12547,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -12641,8 +12599,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
 			ts_params->session_mpool);
 
 	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -13149,8 +13106,7 @@ test_authenticated_encrypt_with_esn(
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
 				ut_params->sess,
-				&ut_params->cipher_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->cipher_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -13281,8 +13237,7 @@ test_authenticated_decrypt_with_esn(
 
 	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
 				ut_params->sess,
-				&ut_params->auth_xform,
-				ts_params->session_priv_mpool);
+				&ut_params->auth_xform);
 
 	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
 
@@ -14003,11 +13958,6 @@ test_scheduler_attach_worker_op(void)
 			rte_mempool_free(ts_params->session_mpool);
 			ts_params->session_mpool = NULL;
 		}
-		if (ts_params->session_priv_mpool) {
-			rte_mempool_free(ts_params->session_priv_mpool);
-			ts_params->session_priv_mpool = NULL;
-		}
-
 		if (info.sym.max_nb_sessions != 0 &&
 				info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
 			RTE_LOG(ERR, USER1,
@@ -14024,32 +13974,14 @@ test_scheduler_attach_worker_op(void)
 			ts_params->session_mpool =
 				rte_cryptodev_sym_session_pool_create(
 						"test_sess_mp",
-						MAX_NB_SESSIONS, 0, 0, 0,
+						MAX_NB_SESSIONS,
+						session_size, 0, 0,
 						SOCKET_ID_ANY);
 			TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
 					"session mempool allocation failed");
 		}
 
-		/*
-		 * Create mempool with maximum number of sessions,
-		 * to include device specific session private data
-		 */
-		if (ts_params->session_priv_mpool == NULL) {
-			ts_params->session_priv_mpool = rte_mempool_create(
-					"test_sess_mp_priv",
-					MAX_NB_SESSIONS,
-					session_size,
-					0, 0, NULL, NULL, NULL,
-					NULL, SOCKET_ID_ANY,
-					0);
-
-			TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
-					"session mempool allocation failed");
-		}
-
 		ts_params->qp_conf.mp_session = ts_params->session_mpool;
-		ts_params->qp_conf.mp_session_private =
-				ts_params->session_priv_mpool;
 
 		ret = rte_cryptodev_scheduler_worker_attach(sched_id,
 				(uint8_t)i);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 1cdd84d01f..a3a10d484b 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -89,7 +89,6 @@ struct crypto_testsuite_params {
 	struct rte_mempool *large_mbuf_pool;
 	struct rte_mempool *op_mpool;
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 	struct rte_cryptodev_config conf;
 	struct rte_cryptodev_qp_conf qp_conf;
 
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..35da574da8 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -924,7 +924,6 @@ testsuite_setup(void)
 	/* configure qp */
 	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
 	ts_params->qp_conf.mp_session = ts_params->session_mpool;
-	ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
 	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
 		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			dev_id, qp_id, &ts_params->qp_conf,
diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
index 3cdb2c96e8..9417803f18 100644
--- a/app/test/test_cryptodev_blockcipher.c
+++ b/app/test/test_cryptodev_blockcipher.c
@@ -68,7 +68,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 	struct rte_mempool *mbuf_pool,
 	struct rte_mempool *op_mpool,
 	struct rte_mempool *sess_mpool,
-	struct rte_mempool *sess_priv_mpool,
 	uint8_t dev_id,
 	char *test_msg)
 {
@@ -81,7 +80,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 	struct rte_crypto_sym_op *sym_op = NULL;
 	struct rte_crypto_op *op = NULL;
 	struct rte_cryptodev_info dev_info;
-	struct rte_cryptodev_sym_session *sess = NULL;
+	void *sess = NULL;
 
 	int status = TEST_SUCCESS;
 	const struct blockcipher_test_data *tdata = t->test_data;
@@ -514,7 +513,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
 		sess = rte_cryptodev_sym_session_create(sess_mpool);
 
 		status = rte_cryptodev_sym_session_init(dev_id, sess,
-				init_xform, sess_priv_mpool);
+				init_xform);
 		if (status == -ENOTSUP) {
 			snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "UNSUPPORTED");
 			status = TEST_SKIPPED;
@@ -831,7 +830,6 @@ blockcipher_test_case_run(const void *data)
 			p_testsuite_params->mbuf_pool,
 			p_testsuite_params->op_mpool,
 			p_testsuite_params->session_mpool,
-			p_testsuite_params->session_priv_mpool,
 			p_testsuite_params->valid_devs[0],
 			test_msg);
 	return status;
diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 3ad20921e2..59229a1cde 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -61,7 +61,6 @@ struct event_crypto_adapter_test_params {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *op_mpool;
 	struct rte_mempool *session_mpool;
-	struct rte_mempool *session_priv_mpool;
 	struct rte_cryptodev_config *config;
 	uint8_t crypto_event_port_id;
 	uint8_t internal_port_op_fwd;
@@ -167,7 +166,7 @@ static int
 test_op_forward_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	union rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
@@ -203,7 +202,7 @@ test_op_forward_mode(uint8_t session_less)
 
 		/* Create Crypto session*/
 		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
-				&cipher_xform, params.session_priv_mpool);
+				&cipher_xform);
 		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
 
 		ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID,
@@ -367,7 +366,7 @@ static int
 test_op_new_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	union rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
@@ -411,7 +410,7 @@ test_op_new_mode(uint8_t session_less)
 						&m_data, sizeof(m_data));
 		}
 		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
-				&cipher_xform, params.session_priv_mpool);
+				&cipher_xform);
 		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
 
 		rte_crypto_op_attach_sym_session(op, sess);
@@ -553,22 +552,12 @@ configure_cryptodev(void)
 
 	params.session_mpool = rte_cryptodev_sym_session_pool_create(
 			"CRYPTO_ADAPTER_SESSION_MP",
-			MAX_NB_SESSIONS, 0, 0,
+			MAX_NB_SESSIONS, session_size, 0,
 			sizeof(union rte_event_crypto_metadata),
 			SOCKET_ID_ANY);
 	TEST_ASSERT_NOT_NULL(params.session_mpool,
 			"session mempool allocation failed\n");
 
-	params.session_priv_mpool = rte_mempool_create(
-				"CRYPTO_AD_SESS_MP_PRIV",
-				MAX_NB_SESSIONS,
-				session_size,
-				0, 0, NULL, NULL, NULL,
-				NULL, SOCKET_ID_ANY,
-				0);
-	TEST_ASSERT_NOT_NULL(params.session_priv_mpool,
-			"session mempool allocation failed\n");
-
 	rte_cryptodev_info_get(TEST_CDEV_ID, &info);
 	conf.nb_queue_pairs = info.max_nb_queue_pairs;
 	conf.socket_id = SOCKET_ID_ANY;
@@ -580,7 +569,6 @@ configure_cryptodev(void)
 
 	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
 	qp_conf.mp_session = params.session_mpool;
-	qp_conf.mp_session_private = params.session_priv_mpool;
 
 	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
 			TEST_CDEV_ID, TEST_CDEV_QP_ID, &qp_conf,
@@ -934,12 +922,6 @@ crypto_teardown(void)
 		rte_mempool_free(params.session_mpool);
 		params.session_mpool = NULL;
 	}
-	if (params.session_priv_mpool != NULL) {
-		rte_mempool_avail_count(params.session_priv_mpool);
-		rte_mempool_free(params.session_priv_mpool);
-		params.session_priv_mpool = NULL;
-	}
-
 	/* Free ops mempool */
 	if (params.op_mpool != NULL) {
 		RTE_LOG(DEBUG, USER1, "EVENT_CRYPTO_SYM_OP_POOL count %u\n",
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index 2ffa2a8e79..134545efe1 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -355,20 +355,9 @@ testsuite_setup(void)
 		return TEST_FAILED;
 	}
 
-	ts_params->qp_conf.mp_session_private = rte_mempool_create(
-				"test_priv_sess_mp",
-				MAX_NB_SESSIONS,
-				sess_sz,
-				0, 0, NULL, NULL, NULL,
-				NULL, SOCKET_ID_ANY,
-				0);
-
-	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
-			"private session mempool allocation failed");
-
 	ts_params->qp_conf.mp_session =
 		rte_cryptodev_sym_session_pool_create("test_sess_mp",
-			MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
+			MAX_NB_SESSIONS, sess_sz, 0, 0, SOCKET_ID_ANY);
 
 	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
 			"session mempool allocation failed");
@@ -413,11 +402,6 @@ testsuite_teardown(void)
 		rte_mempool_free(ts_params->qp_conf.mp_session);
 		ts_params->qp_conf.mp_session = NULL;
 	}
-
-	if (ts_params->qp_conf.mp_session_private != NULL) {
-		rte_mempool_free(ts_params->qp_conf.mp_session_private);
-		ts_params->qp_conf.mp_session_private = NULL;
-	}
 }
 
 static int
@@ -644,7 +628,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
 	struct rte_cryptodev_qp_conf *qp, uint8_t dev_id, uint32_t j)
 {
 	int32_t rc;
-	struct rte_cryptodev_sym_session *s;
+	void *s;
 
 	s = rte_cryptodev_sym_session_create(qp->mp_session);
 	if (s == NULL)
@@ -652,7 +636,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
 
 	/* initiliaze SA crypto session for device */
 	rc = rte_cryptodev_sym_session_init(dev_id, s,
-			ut->crypto_xforms, qp->mp_session_private);
+			ut->crypto_xforms);
 	if (rc == 0) {
 		ut->ss[j].crypto.ses = s;
 		return 0;
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index edb7275e76..75330292af 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -235,7 +235,6 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto qp_setup_cleanup;
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -259,10 +258,8 @@ aesni_gcm_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 	struct aesni_gcm_private *internals = dev->data->dev_private;
 
@@ -271,42 +268,24 @@ aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		AESNI_GCM_LOG(ERR,
-				"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
 	ret = aesni_gcm_set_session_parameters(internals->ops,
-				sess_private_data, xform);
+				sess, xform);
 	if (ret != 0) {
 		AESNI_GCM_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct aesni_gcm_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_gcm_session));
 }
 
 struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index 39c67e3952..efdc05c45f 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -944,7 +944,6 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->stats, 0, sizeof(qp->stats));
 
@@ -974,11 +973,8 @@ aesni_mb_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 /** Configure a aesni multi-buffer session from a crypto xform chain */
 static int
 aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		struct rte_crypto_sym_xform *xform, void *sess)
 {
-	void *sess_private_data;
 	struct aesni_mb_private *internals = dev->data->dev_private;
 	int ret;
 
@@ -987,43 +983,25 @@ aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		AESNI_MB_LOG(ERR,
-				"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = aesni_mb_set_session_parameters(internals->mb_mgr,
-			sess_private_data, xform);
+			sess, xform);
 	if (ret != 0) {
 		AESNI_MB_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
+	RTE_SET_USED(dev);
 
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct aesni_mb_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct aesni_mb_session));
 }
 
 struct rte_cryptodev_ops aesni_mb_pmd_ops = {
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 1b2749fe62..2d3b54b063 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -244,7 +244,6 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto qp_setup_cleanup;
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->stats, 0, sizeof(qp->stats));
 
@@ -268,10 +267,8 @@ armv8_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -279,42 +276,23 @@ armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CDEV_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = armv8_crypto_set_session_parameters(sess_private_data, xform);
+	ret = armv8_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		ARMV8_CRYPTO_LOG_ERR("failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct armv8_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct armv8_crypto_session));
 }
 
 struct rte_cryptodev_ops armv8_crypto_pmd_ops = {
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
index 675ed0ad55..b4b167d0c2 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -224,10 +224,9 @@ bcmfs_sym_get_session(struct rte_crypto_op *op)
 int
 bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 			    struct rte_crypto_sym_xform *xform,
-			    struct rte_cryptodev_sym_session *sess,
-			    struct rte_mempool *mempool)
+			    void *sess)
 {
-	void *sess_private_data;
+	RTE_SET_USED(dev);
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -235,44 +234,23 @@ bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		BCMFS_DP_LOG(ERR,
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = crypto_set_session_parameters(sess_private_data, xform);
+	ret = crypto_set_session_parameters(sess, xform);
 
 	if (ret != 0) {
 		BCMFS_DP_LOG(ERR, "Failed configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-				     sess_private_data);
-
 	return 0;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 void
-bcmfs_sym_session_clear(struct rte_cryptodev *dev,
-			struct rte_cryptodev_sym_session  *sess)
+bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp;
-
-		memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
-		sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	RTE_SET_USED(dev);
+	if (sess)
+		memset(sess, 0, sizeof(struct bcmfs_sym_session));
 }
 
 unsigned int
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
index d40595b4bd..7faafe2fd5 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -93,12 +93,10 @@ bcmfs_process_crypto_op(struct rte_crypto_op *op,
 int
 bcmfs_sym_session_configure(struct rte_cryptodev *dev,
 			    struct rte_crypto_sym_xform *xform,
-			    struct rte_cryptodev_sym_session *sess,
-			    struct rte_mempool *mempool);
+			    void *sess);
 
 void
-bcmfs_sym_session_clear(struct rte_cryptodev *dev,
-			struct rte_cryptodev_sym_session  *sess);
+bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess);
 
 unsigned int
 bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index ce7a100778..8a04820fa6 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -1692,52 +1692,36 @@ caam_jr_set_session_parameters(struct rte_cryptodev *dev,
 static int
 caam_jr_sym_session_configure(struct rte_cryptodev *dev,
 			      struct rte_crypto_sym_xform *xform,
-			      struct rte_cryptodev_sym_session *sess,
-			      struct rte_mempool *mempool)
+			      void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CAAM_JR_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	memset(sess_private_data, 0, sizeof(struct caam_jr_session));
-	ret = caam_jr_set_session_parameters(dev, xform, sess_private_data);
+	memset(sess, 0, sizeof(struct caam_jr_session));
+	ret = caam_jr_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		CAAM_JR_ERR("failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
-
 	return 0;
 }
 
 /* Clear the memory of session so it doesn't leave key material behind */
 static void
-caam_jr_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+caam_jr_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
+	RTE_SET_USED(dev);
+
+	struct caam_jr_session *s = (struct caam_jr_session *)sess;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
+	if (sess) {
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(struct caam_jr_session));
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c b/drivers/crypto/ccp/ccp_pmd_ops.c
index 0d615d311c..cac1268130 100644
--- a/drivers/crypto/ccp/ccp_pmd_ops.c
+++ b/drivers/crypto/ccp/ccp_pmd_ops.c
@@ -727,7 +727,6 @@ ccp_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	/* mempool for batch info */
 	qp->batch_mp = rte_mempool_create(
@@ -758,11 +757,9 @@ ccp_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
 			  struct rte_crypto_sym_xform *xform,
-			  struct rte_cryptodev_sym_session *sess,
-			  struct rte_mempool *mempool)
+			  void *sess)
 {
 	int ret;
-	void *sess_private_data;
 	struct ccp_private *internals;
 
 	if (unlikely(sess == NULL || xform == NULL)) {
@@ -770,39 +767,22 @@ ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -ENOMEM;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CCP_LOG_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
 	internals = (struct ccp_private *)dev->data->dev_private;
-	ret = ccp_set_session_parameters(sess_private_data, xform, internals);
+	ret = ccp_set_session_parameters(sess, xform, internals);
 	if (ret != 0) {
 		CCP_LOG_ERR("failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
-	set_sym_session_private_data(sess, dev->driver_id,
-				 sess_private_data);
 
 	return 0;
 }
 
 static void
-ccp_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		      struct rte_cryptodev_sym_session *sess)
+ccp_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
-	if (sess_priv) {
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		rte_mempool_put(sess_mp, sess_priv);
-		memset(sess_priv, 0, sizeof(struct ccp_session));
-		set_sym_session_private_data(sess, index, NULL);
-	}
+	RTE_SET_USED(dev);
+	if (sess)
+		memset(sess, 0, sizeof(struct ccp_session));
 }
 
 struct rte_cryptodev_ops ccp_ops = {
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index 99968cc353..50cae5e3d6 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -32,17 +32,18 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
 	if (sess == NULL)
 		return NULL;
 
-	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
-				    sess, qp->sess_mp_priv);
+	sess->sess_data[driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(driver_id * sess->priv_sz));
+	priv = get_sym_session_private_data(sess, driver_id);
+	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, (void *)priv);
 	if (ret)
 		goto sess_put;
 
-	priv = get_sym_session_private_data(sess, driver_id);
-
 	sym_op->session = sess;
 
 	return priv;
-
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return NULL;
@@ -144,9 +145,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
 			ret = cpt_sym_inst_fill(qp, op, sess, infl_req,
 						&inst[0]);
 			if (unlikely(ret)) {
-				sym_session_clear(cn10k_cryptodev_driver_id,
-						  op->sym->session);
-				rte_mempool_put(qp->sess_mp, op->sym->session);
+				sym_session_clear(op->sym->session);
 				return 0;
 			}
 			w7 = sess->cpt_inst_w7;
@@ -437,8 +436,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
 temp_sess_free:
 	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
 		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-			sym_session_clear(cn10k_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session);
 			memset(cop->sym->session, 0, sz);
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index 4c2dc5b080..5f83581131 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -81,17 +81,19 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
 	if (sess == NULL)
 		return NULL;
 
-	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
-				    sess, qp->sess_mp_priv);
+	sess->sess_data[driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(driver_id * sess->priv_sz));
+	priv = get_sym_session_private_data(sess, driver_id);
+	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform,
+			(void *)priv);
 	if (ret)
 		goto sess_put;
 
-	priv = get_sym_session_private_data(sess, driver_id);
-
 	sym_op->session = sess;
 
 	return priv;
-
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return NULL;
@@ -126,8 +128,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
 			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
 						     inst);
 			if (unlikely(ret)) {
-				sym_session_clear(cn9k_cryptodev_driver_id,
-						  op->sym->session);
+				sym_session_clear(op->sym->session);
 				rte_mempool_put(qp->sess_mp, op->sym->session);
 			}
 			inst->w7.u64 = sess->cpt_inst_w7;
@@ -484,8 +485,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
 temp_sess_free:
 	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
 		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
-			sym_session_clear(cn9k_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session);
 			memset(cop->sym->session, 0, sz);
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 41d8fe49e1..52d9cf0cf3 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -379,7 +379,6 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = conf->mp_session;
-	qp->sess_mp_priv = conf->mp_session_private;
 	dev->data->queue_pairs[qp_id] = qp;
 
 	return 0;
@@ -493,27 +492,20 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
 }
 
 int
-sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
+sym_session_configure(struct roc_cpt *roc_cpt,
 		      struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+		      void *sess)
 {
 	struct cnxk_se_sess *sess_priv;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret < 0))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		plt_dp_err("Could not allocate session private data");
-		return -ENOMEM;
-	}
+	memset(sess, 0, sizeof(struct cnxk_se_sess));
 
-	memset(priv, 0, sizeof(struct cnxk_se_sess));
-
-	sess_priv = priv;
+	sess_priv = sess;
 
 	switch (ret) {
 	case CNXK_CPT_CIPHER:
@@ -547,7 +539,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
 	}
 
 	if (ret)
-		goto priv_put;
+		return -ENOTSUP;
 
 	if ((sess_priv->roc_se_ctx.fc_type == ROC_SE_HASH_HMAC) &&
 	    cpt_mac_len_verify(&xform->auth)) {
@@ -557,66 +549,45 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
 			sess_priv->roc_se_ctx.auth_key = NULL;
 		}
 
-		ret = -ENOTSUP;
-		goto priv_put;
+		return -ENOTSUP;
 	}
 
 	sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
 
-	set_sym_session_private_data(sess, driver_id, sess_priv);
-
 	return 0;
-
-priv_put:
-	rte_mempool_put(pool, priv);
-
-	return -ENOTSUP;
 }
 
 int
 cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
 			       struct rte_crypto_sym_xform *xform,
-			       struct rte_cryptodev_sym_session *sess,
-			       struct rte_mempool *pool)
+			       void *sess)
 {
 	struct cnxk_cpt_vf *vf = dev->data->dev_private;
 	struct roc_cpt *roc_cpt = &vf->cpt;
-	uint8_t driver_id;
 
-	driver_id = dev->driver_id;
-
-	return sym_session_configure(roc_cpt, driver_id, xform, sess, pool);
+	return sym_session_configure(roc_cpt, xform, sess);
 }
 
 void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
-	struct cnxk_se_sess *sess_priv;
-	struct rte_mempool *pool;
+	struct cnxk_se_sess *sess_priv = sess;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	sess_priv = priv;
-
 	if (sess_priv->roc_se_ctx.auth_key != NULL)
 		plt_free(sess_priv->roc_se_ctx.auth_key);
 
-	memset(priv, 0, cnxk_cpt_sym_session_get_size(NULL));
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess_priv, 0, cnxk_cpt_sym_session_get_size(NULL));
 }
 
 void
-cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
-			   struct rte_cryptodev_sym_session *sess)
+cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	return sym_session_clear(dev->driver_id, sess);
+	RTE_SET_USED(dev);
+
+	return sym_session_clear(sess);
 }
 
 unsigned int
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index c5332dec53..3c09d10582 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -111,18 +111,15 @@ unsigned int cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev);
 
 int cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
 				   struct rte_crypto_sym_xform *xform,
-				   struct rte_cryptodev_sym_session *sess,
-				   struct rte_mempool *pool);
+				   void *sess);
 
-int sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
+int sym_session_configure(struct roc_cpt *roc_cpt,
 			  struct rte_crypto_sym_xform *xform,
-			  struct rte_cryptodev_sym_session *sess,
-			  struct rte_mempool *pool);
+			  void *sess);
 
-void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
-				struct rte_cryptodev_sym_session *sess);
+void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess);
 
-void sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess);
+void sym_session_clear(void *sess);
 
 unsigned int cnxk_ae_session_size_get(struct rte_cryptodev *dev __rte_unused);
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 176f1a27a0..42229763f8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3438,49 +3438,32 @@ dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
 static int
 dpaa2_sec_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA2_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = dpaa2_sec_set_session_parameters(dev, xform, sess_private_data);
+	ret = dpaa2_sec_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		DPAA2_SEC_ERR("Failed to configure session parameters");
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
+	RTE_SET_USED(dev);
+	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess) {
 		rte_free(s->ctxt);
 		rte_free(s->cipher_key.data);
 		rte_free(s->auth_key.data);
 		memset(s, 0, sizeof(dpaa2_sec_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 5a087df090..4727088b45 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -2537,33 +2537,18 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
 
 static int
 dpaa_sec_sym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		struct rte_crypto_sym_xform *xform, void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		DPAA_SEC_ERR("Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = dpaa_sec_set_session_parameters(dev, xform, sess_private_data);
+	ret = dpaa_sec_set_session_parameters(dev, xform, sess);
 	if (ret != 0) {
 		DPAA_SEC_ERR("failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
-
 	return 0;
 }
 
@@ -2584,18 +2569,14 @@ free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-dpaa_sec_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+dpaa_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	PMD_INIT_FUNC_TRACE();
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
+	RTE_SET_USED(dev);
+	dpaa_sec_session *s = (dpaa_sec_session *)sess;
 
-	if (sess_priv) {
+	if (sess)
 		free_session_memory(dev, s);
-		set_sym_session_private_data(sess, index, NULL);
-	}
 }
 
 #ifdef RTE_LIB_SECURITY
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index f075054807..b2e5c92598 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -220,7 +220,6 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 
 	qp->mgr = internals->mgr;
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -243,10 +242,8 @@ kasumi_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 	struct kasumi_private *internals = dev->data->dev_private;
 
@@ -255,43 +252,24 @@ kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		KASUMI_LOG(ERR,
-				"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = kasumi_set_session_parameters(internals->mgr,
-					sess_private_data, xform);
+					sess, xform);
 	if (ret != 0) {
 		KASUMI_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct kasumi_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct kasumi_session));
 }
 
 struct rte_cryptodev_ops kasumi_pmd_ops = {
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 682cf8b607..615ab9f45d 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -165,14 +165,12 @@ mlx5_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 				  struct rte_crypto_sym_xform *xform,
-				  struct rte_cryptodev_sym_session *session,
-				  struct rte_mempool *mp)
+				  void *session)
 {
 	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_crypto_session *sess_private_data;
+	struct mlx5_crypto_session *sess_private_data = session;
 	struct rte_crypto_cipher_xform *cipher;
 	uint8_t encryption_order;
-	int ret;
 
 	if (unlikely(xform->next != NULL)) {
 		DRV_LOG(ERR, "Xform next is not supported.");
@@ -183,17 +181,9 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
 		return -ENOTSUP;
 	}
-	ret = rte_mempool_get(mp, (void *)&sess_private_data);
-	if (ret != 0) {
-		DRV_LOG(ERR,
-			"Failed to get session %p private data from mempool.",
-			sess_private_data);
-		return -ENOMEM;
-	}
 	cipher = &xform->cipher;
 	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
 	if (sess_private_data->dek == NULL) {
-		rte_mempool_put(mp, sess_private_data);
 		DRV_LOG(ERR, "Failed to prepare dek.");
 		return -ENOMEM;
 	}
@@ -228,27 +218,21 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 	sess_private_data->dek_id =
 			rte_cpu_to_be_32(sess_private_data->dek->obj->id &
 					 0xffffff);
-	set_sym_session_private_data(session, dev->driver_id,
-				     sess_private_data);
 	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
 	return 0;
 }
 
 static void
-mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
-			      struct rte_cryptodev_sym_session *sess)
+mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	struct mlx5_crypto_priv *priv = dev->data->dev_private;
-	struct mlx5_crypto_session *spriv = get_sym_session_private_data(sess,
-								dev->driver_id);
+	struct mlx5_crypto_session *spriv = sess;
 
 	if (unlikely(spriv == NULL)) {
 		DRV_LOG(ERR, "Failed to get session %p private data.", spriv);
 		return;
 	}
 	mlx5_crypto_dek_destroy(priv, spriv->dek);
-	set_sym_session_private_data(sess, dev->driver_id, NULL);
-	rte_mempool_put(rte_mempool_from_obj(spriv), spriv);
 	DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
 }
 
diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
index e04a2c88c7..2e4b27ea21 100644
--- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
+++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
@@ -704,7 +704,6 @@ mrvl_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 			break;
 
 		qp->sess_mp = qp_conf->mp_session;
-		qp->sess_mp_priv = qp_conf->mp_session_private;
 
 		memset(&qp->stats, 0, sizeof(qp->stats));
 		dev->data->queue_pairs[qp_id] = qp;
@@ -735,12 +734,9 @@ mrvl_crypto_pmd_sym_session_get_size(__rte_unused struct rte_cryptodev *dev)
  */
 static int
 mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mp)
+		struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct mrvl_crypto_session *mrvl_sess;
-	void *sess_private_data;
 	int ret;
 
 	if (sess == NULL) {
@@ -748,25 +744,16 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mp, &sess_private_data)) {
-		CDEV_LOG_ERR("Couldn't get object from session mempool.");
-		return -ENOMEM;
-	}
+	memset(sess, 0, sizeof(struct mrvl_crypto_session));
 
-	memset(sess_private_data, 0, sizeof(struct mrvl_crypto_session));
-
-	ret = mrvl_crypto_set_session_parameters(sess_private_data, xform);
+	ret = mrvl_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		MRVL_LOG(ERR, "Failed to configure session parameters!");
-
-		/* Return session to mempool */
-		rte_mempool_put(mp, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
 
-	mrvl_sess = (struct mrvl_crypto_session *)sess_private_data;
+	mrvl_sess = (struct mrvl_crypto_session *)sess;
 	if (sam_session_create(&mrvl_sess->sam_sess_params,
 				&mrvl_sess->sam_sess) < 0) {
 		MRVL_LOG(DEBUG, "Failed to create session!");
@@ -789,17 +776,13 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
  * @returns 0. Always.
  */
 static void
-mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
+	if (sess) {
 		struct mrvl_crypto_session *mrvl_sess =
-			(struct mrvl_crypto_session *)sess_priv;
+			(struct mrvl_crypto_session *)sess;
 
 		if (mrvl_sess->sam_sess &&
 		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
@@ -807,9 +790,6 @@ mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
 		}
 
 		memset(mrvl_sess, 0, sizeof(struct mrvl_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index f8b7edcd69..0c9bbfef46 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -532,22 +532,16 @@ configure_aead_ctx(struct rte_crypto_aead_xform *xform,
 static int
 nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
 			      struct rte_crypto_sym_xform *xform,
-			      struct rte_cryptodev_sym_session *sess,
-			      struct rte_mempool *mempool)
+			      void *sess)
 {
-	void *mp_obj;
 	struct nitrox_crypto_ctx *ctx;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	struct rte_crypto_auth_xform *auth_xform = NULL;
 	struct rte_crypto_aead_xform *aead_xform = NULL;
 	int ret = -EINVAL;
 
-	if (rte_mempool_get(mempool, &mp_obj)) {
-		NITROX_LOG(ERR, "Couldn't allocate context\n");
-		return -ENOMEM;
-	}
-
-	ctx = mp_obj;
+	RTE_SET_USED(cdev);
+	ctx = sess;
 	ctx->nitrox_chain = get_crypto_chain_order(xform);
 	switch (ctx->nitrox_chain) {
 	case NITROX_CHAIN_CIPHER_ONLY:
@@ -586,28 +580,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
 	}
 
 	ctx->iova = rte_mempool_virt2iova(ctx);
-	set_sym_session_private_data(sess, cdev->driver_id, ctx);
 	return 0;
 err:
-	rte_mempool_put(mempool, mp_obj);
 	return ret;
 }
 
 static void
-nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev,
-			  struct rte_cryptodev_sym_session *sess)
+nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev, void *sess)
 {
-	struct nitrox_crypto_ctx *ctx = get_sym_session_private_data(sess,
-							cdev->driver_id);
-	struct rte_mempool *sess_mp;
-
-	if (!ctx)
-		return;
-
-	memset(ctx, 0, sizeof(*ctx));
-	sess_mp = rte_mempool_from_obj(ctx);
-	set_sym_session_private_data(sess, cdev->driver_id, NULL);
-	rte_mempool_put(sess_mp, ctx);
+	RTE_SET_USED(cdev);
+	if (sess)
+		memset(sess, 0, sizeof(struct nitrox_crypto_ctx));
 }
 
 static struct nitrox_crypto_ctx *
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index a8b5a06e7f..65bfa8dcf7 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -234,7 +234,6 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -258,10 +257,8 @@ null_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mp)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -269,42 +266,23 @@ null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mp, &sess_private_data)) {
-		NULL_LOG(ERR,
-				"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = null_crypto_set_session_parameters(sess_private_data, xform);
+	ret = null_crypto_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		NULL_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mp, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct null_crypto_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct null_crypto_session));
 }
 
 static struct rte_cryptodev_ops pmd_ops = {
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 7c6b1e45b4..95659e472b 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -49,7 +49,6 @@ struct cpt_instance {
 	uint32_t queue_id;
 	uintptr_t rsvd;
 	struct rte_mempool *sess_mp;
-	struct rte_mempool *sess_mp_priv;
 	struct cpt_qp_meta_info meta_info;
 	uint8_t ca_enabled;
 };
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index 9e8fd495cf..abd0963be0 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -171,7 +171,6 @@ otx_cpt_que_pair_setup(struct rte_cryptodev *dev,
 
 	instance->queue_id = que_pair_id;
 	instance->sess_mp = qp_conf->mp_session;
-	instance->sess_mp_priv = qp_conf->mp_session_private;
 	dev->data->queue_pairs[que_pair_id] = instance;
 
 	return 0;
@@ -243,29 +242,22 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
 }
 
 static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+sym_session_configure(struct rte_crypto_sym_xform *xform,
+		      void *sess)
 {
 	struct rte_crypto_sym_xform *temp_xform = xform;
 	struct cpt_sess_misc *misc;
 	vq_cmd_word3_t vq_cmd_w3;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		CPT_LOG_ERR("Could not allocate session private data");
-		return -ENOMEM;
-	}
-
-	memset(priv, 0, sizeof(struct cpt_sess_misc) +
+	memset(sess, 0, sizeof(struct cpt_sess_misc) +
 			offsetof(struct cpt_ctx, mc_ctx));
 
-	misc = priv;
+	misc = sess;
 
 	for ( ; xform != NULL; xform = xform->next) {
 		switch (xform->type) {
@@ -301,8 +293,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 		goto priv_put;
 	}
 
-	set_sym_session_private_data(sess, driver_id, priv);
-
 	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
 			     sizeof(struct cpt_sess_misc);
 
@@ -316,56 +306,46 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 	return 0;
 
 priv_put:
-	if (priv)
-		rte_mempool_put(pool, priv);
 	return -ENOTSUP;
 }
 
 static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
 	struct cpt_sess_misc *misc;
-	struct rte_mempool *pool;
 	struct cpt_ctx *ctx;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	misc = priv;
+	misc = sess;
 	ctx = SESS_PRIV(misc);
 
 	if (ctx->auth_key != NULL)
 		rte_free(ctx->auth_key);
 
-	memset(priv, 0, cpt_get_session_size());
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess, 0, cpt_get_session_size());
 }
 
 static int
 otx_cpt_session_cfg(struct rte_cryptodev *dev,
 		    struct rte_crypto_sym_xform *xform,
-		    struct rte_cryptodev_sym_session *sess,
-		    struct rte_mempool *pool)
+		    void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_configure(dev->driver_id, xform, sess, pool);
+	return sym_session_configure(xform, sess);
 }
 
 
 static void
-otx_cpt_session_clear(struct rte_cryptodev *dev,
-		  struct rte_cryptodev_sym_session *sess)
+otx_cpt_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_clear(dev->driver_id, sess);
+	return sym_session_clear(sess);
 }
 
 static unsigned int
@@ -576,7 +556,6 @@ static __rte_always_inline void * __rte_hot
 otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 				struct rte_crypto_op *op)
 {
-	const int driver_id = otx_cryptodev_driver_id;
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	struct rte_cryptodev_sym_session *sess;
 	void *req;
@@ -589,8 +568,12 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 		return NULL;
 	}
 
-	ret = sym_session_configure(driver_id, sym_op->xform, sess,
-				    instance->sess_mp_priv);
+	sess->sess_data[otx_cryptodev_driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(otx_cryptodev_driver_id * sess->priv_sz));
+	ret = sym_session_configure(sym_op->xform,
+			sess->sess_data[otx_cryptodev_driver_id].data);
 	if (ret)
 		goto sess_put;
 
@@ -604,7 +587,7 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
 	return req;
 
 priv_put:
-	sym_session_clear(driver_id, sess);
+	sym_session_clear(sess);
 sess_put:
 	rte_mempool_put(instance->sess_mp, sess);
 	return NULL;
@@ -913,7 +896,6 @@ free_sym_session_data(const struct cpt_instance *instance,
 	memset(cop->sym->session, 0,
 	       rte_cryptodev_sym_get_existing_header_session_size(
 		       cop->sym->session));
-	rte_mempool_put(instance->sess_mp_priv, sess_private_data_t);
 	rte_mempool_put(instance->sess_mp, cop->sym->session);
 	cop->sym->session = NULL;
 }
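
The sessionless path in the octeontx hunk above computes the driver's private area by hand: `sess_data[id].data = sess + header_size + id * priv_sz`, i.e. every driver's private bytes now live inline after the session header instead of in a separate private mempool. A self-contained sketch of that layout and pointer arithmetic (struct names and the 4-driver bound are illustrative assumptions, not DPDK's actual definitions):

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for the unified session layout assumed above. */
struct demo_sess_data { void *data; };

struct demo_sym_session {
	uint16_t priv_sz;                  /* per-driver private area size */
	struct demo_sess_data sess_data[4];
};

/* Locate driver N's private area: it follows the fixed header at an
 * offset of driver_id * priv_sz, mirroring the expression used in
 * otx_cpt_enq_single_sym_sessless(). */
static void *
demo_sess_priv(struct demo_sym_session *sess, unsigned int driver_id)
{
	uint8_t *base = (uint8_t *)sess + sizeof(struct demo_sym_session);

	return base + (size_t)driver_id * sess->priv_sz;
}
```

Because the private area is carved out of the same allocation as the header, `free_sym_session_data()` above no longer needs a `sess_mp_priv` put; returning the session object releases everything.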
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 7b744cd4b4..dcfbc49996 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -371,29 +371,21 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
 }
 
 static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
-		      struct rte_cryptodev_sym_session *sess,
-		      struct rte_mempool *pool)
+sym_session_configure(struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct rte_crypto_sym_xform *temp_xform = xform;
 	struct cpt_sess_misc *misc;
 	vq_cmd_word3_t vq_cmd_w3;
-	void *priv;
 	int ret;
 
 	ret = sym_xform_verify(xform);
 	if (unlikely(ret))
 		return ret;
 
-	if (unlikely(rte_mempool_get(pool, &priv))) {
-		CPT_LOG_ERR("Could not allocate session private data");
-		return -ENOMEM;
-	}
-
-	memset(priv, 0, sizeof(struct cpt_sess_misc) +
+	memset(sess, 0, sizeof(struct cpt_sess_misc) +
 			offsetof(struct cpt_ctx, mc_ctx));
 
-	misc = priv;
+	misc = sess;
 
 	for ( ; xform != NULL; xform = xform->next) {
 		switch (xform->type) {
@@ -414,7 +406,7 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 		}
 
 		if (ret)
-			goto priv_put;
+			return -ENOTSUP;
 	}
 
 	if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
@@ -425,12 +417,9 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 			rte_free(ctx->auth_key);
 			ctx->auth_key = NULL;
 		}
-		ret = -ENOTSUP;
-		goto priv_put;
+		return -ENOTSUP;
 	}
 
-	set_sym_session_private_data(sess, driver_id, misc);
-
 	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
 			     sizeof(struct cpt_sess_misc);
 
@@ -451,11 +440,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
 	misc->cpt_inst_w7 = vq_cmd_w3.u64;
 
 	return 0;
-
-priv_put:
-	rte_mempool_put(pool, priv);
-
-	return -ENOTSUP;
 }
 
 static __rte_always_inline int32_t __rte_hot
@@ -765,7 +749,6 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 			      struct pending_queue *pend_q,
 			      unsigned int burst_index)
 {
-	const int driver_id = otx2_cryptodev_driver_id;
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	struct rte_cryptodev_sym_session *sess;
 	int ret;
@@ -775,8 +758,12 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	if (sess == NULL)
 		return -ENOMEM;
 
-	ret = sym_session_configure(driver_id, sym_op->xform, sess,
-				    qp->sess_mp_priv);
+	sess->sess_data[otx2_cryptodev_driver_id].data =
+			(void *)((uint8_t *)sess +
+			rte_cryptodev_sym_get_header_session_size() +
+			(otx2_cryptodev_driver_id * sess->priv_sz));
+	ret = sym_session_configure(sym_op->xform,
+			sess->sess_data[otx2_cryptodev_driver_id].data);
 	if (ret)
 		goto sess_put;
 
@@ -790,7 +777,7 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
 	return 0;
 
 priv_put:
-	sym_session_clear(driver_id, sess);
+	sym_session_clear(sess);
 sess_put:
 	rte_mempool_put(qp->sess_mp, sess);
 	return ret;
@@ -1035,8 +1022,7 @@ otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
 		}
 
 		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
-			sym_session_clear(otx2_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			sz = rte_cryptodev_sym_get_existing_header_session_size(
 					cop->sym->session);
 			memset(cop->sym->session, 0, sz);
@@ -1291,7 +1277,6 @@ otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	}
 
 	qp->sess_mp = conf->mp_session;
-	qp->sess_mp_priv = conf->mp_session_private;
 	dev->data->queue_pairs[qp_id] = qp;
 
 	return 0;
@@ -1330,21 +1315,22 @@ otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
 			       struct rte_crypto_sym_xform *xform,
-			       struct rte_cryptodev_sym_session *sess,
-			       struct rte_mempool *pool)
+			       void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_configure(dev->driver_id, xform, sess, pool);
+	return sym_session_configure(xform, sess);
 }
 
 static void
 otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
-			   struct rte_cryptodev_sym_session *sess)
+			   void *sess)
 {
 	CPT_PMD_INIT_FUNC_TRACE();
+	RTE_SET_USED(dev);
 
-	return sym_session_clear(dev->driver_id, sess);
+	return sym_session_clear(sess);
 }
 
 static unsigned int
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
index 01c081a216..5f63eaf7b7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
@@ -8,29 +8,21 @@
 #include "cpt_pmd_logs.h"
 
 static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
+sym_session_clear(void *sess)
 {
-	void *priv = get_sym_session_private_data(sess, driver_id);
 	struct cpt_sess_misc *misc;
-	struct rte_mempool *pool;
 	struct cpt_ctx *ctx;
 
-	if (priv == NULL)
+	if (sess == NULL)
 		return;
 
-	misc = priv;
+	misc = sess;
 	ctx = SESS_PRIV(misc);
 
 	if (ctx->auth_key != NULL)
 		rte_free(ctx->auth_key);
 
-	memset(priv, 0, cpt_get_session_size());
-
-	pool = rte_mempool_from_obj(priv);
-
-	set_sym_session_private_data(sess, driver_id, NULL);
-
-	rte_mempool_put(pool, priv);
+	memset(sess, 0, cpt_get_session_size());
 }
 
 static __rte_always_inline uint8_t
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..1b48a6b400 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -741,7 +741,6 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		goto qp_setup_cleanup;
 
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->stats, 0, sizeof(qp->stats));
 
@@ -772,10 +771,8 @@ openssl_pmd_asym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -783,24 +780,12 @@ openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		OPENSSL_LOG(ERR,
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	ret = openssl_set_session_parameters(sess_private_data, xform);
+	ret = openssl_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		OPENSSL_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-			sess_private_data);
-
 	return 0;
 }
 
@@ -1154,19 +1139,13 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-openssl_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+openssl_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		openssl_reset_session(sess_priv);
-		memset(sess_priv, 0, sizeof(struct openssl_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
+	if (sess) {
+		openssl_reset_session(sess);
+		memset(sess, 0, sizeof(struct openssl_session));
 	}
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 2a22347c7f..114bf081c1 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -172,21 +172,14 @@ qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
 }
 
 void
-qat_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+qat_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
+	struct qat_sym_session *s = (struct qat_sym_session *)sess;
 
-	if (sess_priv) {
+	if (sess) {
 		if (s->bpi_ctx)
 			bpi_cipher_ctx_free(s->bpi_ctx);
 		memset(s, 0, qat_sym_session_get_private_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
 	}
 }
 
@@ -458,31 +451,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 int
 qat_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess_private_data)
 {
-	void *sess_private_data;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		CDEV_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = qat_sym_session_set_parameters(dev, xform, sess_private_data);
 	if (ret != 0) {
 		QAT_LOG(ERR,
 		    "Crypto QAT PMD: failed to configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 7fcc1d6f7b..6da29e2305 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -112,8 +112,7 @@ struct qat_sym_session {
 int
 qat_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool);
+		void *sess);
 
 int
 qat_sym_session_set_parameters(struct rte_cryptodev *dev,
@@ -135,8 +134,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session);
 
 void
-qat_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *session);
+qat_sym_session_clear(struct rte_cryptodev *dev, void *session);
 
 unsigned int
 qat_sym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 465b88ade8..87260b5a22 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -476,9 +476,7 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 
 static int
 scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
-	struct rte_crypto_sym_xform *xform,
-	struct rte_cryptodev_sym_session *sess,
-	struct rte_mempool *mempool)
+	struct rte_crypto_sym_xform *xform, void *sess)
 {
 	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
 	uint32_t i;
@@ -488,7 +486,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct scheduler_worker *worker = &sched_ctx->workers[i];
 
 		ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
-					xform, mempool);
+					xform);
 		if (ret < 0) {
 			CR_SCHED_LOG(ERR, "unable to config sym session");
 			return ret;
@@ -500,8 +498,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
 	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
 	uint32_t i;
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 3f46014b7d..b0f8f6d86a 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -226,7 +226,6 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 
 	qp->mgr = internals->mgr;
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -250,10 +249,8 @@ snow3g_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 	struct snow3g_private *internals = dev->data->dev_private;
 
@@ -262,43 +259,24 @@ snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		SNOW3G_LOG(ERR,
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
 	ret = snow3g_set_session_parameters(internals->mgr,
-					sess_private_data, xform);
+					sess, xform);
 	if (ret != 0) {
 		SNOW3G_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct snow3g_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct snow3g_session));
 }
 
 struct rte_cryptodev_ops snow3g_pmd_ops = {
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 8faa39df4a..de52fec32e 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -37,11 +37,10 @@ static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev);
 static unsigned int virtio_crypto_sym_get_session_private_size(
 		struct rte_cryptodev *dev);
 static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess);
+		void *sess);
 static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *session,
-		struct rte_mempool *mp);
+		void *session);
 
 /*
  * The set of PCI devices this driver supports
@@ -927,7 +926,7 @@ virtio_crypto_check_sym_clear_session_paras(
 static void
 virtio_crypto_sym_clear_session(
 		struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	struct virtio_crypto_hw *hw;
 	struct virtqueue *vq;
@@ -1290,11 +1289,9 @@ static int
 virtio_crypto_check_sym_configure_session_paras(
 		struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sym_sess,
-		struct rte_mempool *mempool)
+		void *sym_sess)
 {
-	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
-		unlikely(mempool == NULL)) {
+	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL)) {
 		VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
 		return -1;
 	}
@@ -1309,12 +1306,9 @@ static int
 virtio_crypto_sym_configure_session(
 		struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
 	int ret;
-	struct virtio_crypto_session crypto_sess;
-	void *session_private = &crypto_sess;
 	struct virtio_crypto_session *session;
 	struct virtio_crypto_op_ctrl_req *ctrl_req;
 	enum virtio_crypto_cmd_id cmd_id;
@@ -1326,19 +1320,13 @@ virtio_crypto_sym_configure_session(
 	PMD_INIT_FUNC_TRACE();
 
 	ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
-			sess, mempool);
+			sess);
 	if (ret < 0) {
 		VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
 		return ret;
 	}
 
-	if (rte_mempool_get(mempool, &session_private)) {
-		VIRTIO_CRYPTO_SESSION_LOG_ERR(
-			"Couldn't get object from session mempool");
-		return -ENOMEM;
-	}
-
-	session = (struct virtio_crypto_session *)session_private;
+	session = (struct virtio_crypto_session *)sess;
 	memset(session, 0, sizeof(struct virtio_crypto_session));
 	ctrl_req = &session->ctrl;
 	ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
@@ -1401,9 +1389,6 @@ virtio_crypto_sym_configure_session(
 		goto error_out;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		session_private);
-
 	return 0;
 
 error_out:
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index 38642d45ab..04126c8a04 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -226,7 +226,6 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 
 	qp->mb_mgr = internals->mb_mgr;
 	qp->sess_mp = qp_conf->mp_session;
-	qp->sess_mp_priv = qp_conf->mp_session_private;
 
 	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
 
@@ -250,10 +249,8 @@ zuc_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
 static int
 zuc_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_mempool *mempool)
+		void *sess)
 {
-	void *sess_private_data;
 	int ret;
 
 	if (unlikely(sess == NULL)) {
@@ -261,43 +258,23 @@ zuc_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
 		return -EINVAL;
 	}
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
-		ZUC_LOG(ERR,
-			"Couldn't get object from session mempool");
-
-		return -ENOMEM;
-	}
-
-	ret = zuc_set_session_parameters(sess_private_data, xform);
+	ret = zuc_set_session_parameters(sess, xform);
 	if (ret != 0) {
 		ZUC_LOG(ERR, "failed configure session parameters");
-
-		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
-
 	return 0;
 }
 
 /** Clear the memory of session so it doesn't leave key material behind */
 static void
-zuc_pmd_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+zuc_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
 {
-	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-
+	RTE_SET_USED(dev);
 	/* Zero out the whole structure */
-	if (sess_priv) {
-		memset(sess_priv, 0, sizeof(struct zuc_session));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
-	}
+	if (sess)
+		memset(sess, 0, sizeof(struct zuc_session));
 }
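
The zuc hunk above is the last of the per-PMD conversions, and the `session_configure` side follows the same discipline as `session_clear`: validate, write parameters straight into the caller-provided area, and on failure simply return a negative errno with no mempool get/put to unwind. A self-contained sketch of that contract (the `demo_*` types are hypothetical, mimicking e.g. `zuc_set_session_parameters()` feeding a driver session):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct demo_xform {
	const unsigned char *key;
	size_t key_len;
	int alg;
};

struct demo_session {
	unsigned char key[16];
	size_t key_len;
	int cipher_alg;
};

/* New-style configure op: parameters land directly in the
 * caller-provided session area; errors need no cleanup because the
 * driver never allocated anything. */
static int
demo_sym_session_configure(const struct demo_xform *xform, void *sess)
{
	struct demo_session *s = sess;

	if (xform == NULL || sess == NULL)
		return -EINVAL;
	if (xform->key_len > sizeof(s->key))
		return -ENOTSUP;
	memcpy(s->key, xform->key, xform->key_len);
	s->key_len = xform->key_len;
	s->cipher_alg = xform->alg;
	return 0;
}
```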
 
 struct rte_cryptodev_ops zuc_pmd_ops = {
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
index b33cb7e139..8522f2dfda 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
@@ -38,8 +38,7 @@ otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
 		}
 
 		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
-			sym_session_clear(otx2_cryptodev_driver_id,
-					  cop->sym->session);
+			sym_session_clear(cop->sym->session);
 			memset(cop->sym->session, 0,
 			rte_cryptodev_sym_get_existing_header_session_size(
 				cop->sym->session));
diff --git a/examples/fips_validation/fips_dev_self_test.c b/examples/fips_validation/fips_dev_self_test.c
index b4eab05a98..bbc27a1b6f 100644
--- a/examples/fips_validation/fips_dev_self_test.c
+++ b/examples/fips_validation/fips_dev_self_test.c
@@ -969,7 +969,6 @@ struct fips_dev_auto_test_env {
 	struct rte_mempool *mpool;
 	struct rte_mempool *op_pool;
 	struct rte_mempool *sess_pool;
-	struct rte_mempool *sess_priv_pool;
 	struct rte_mbuf *mbuf;
 	struct rte_crypto_op *op;
 };
@@ -981,7 +980,7 @@ typedef int (*fips_dev_self_test_prepare_xform_t)(uint8_t,
 		uint32_t);
 
 typedef int (*fips_dev_self_test_prepare_op_t)(struct rte_crypto_op *,
-		struct rte_mbuf *, struct rte_cryptodev_sym_session *,
+		struct rte_mbuf *, void *,
 		uint32_t, struct fips_dev_self_test_vector *);
 
 typedef int (*fips_dev_self_test_check_result_t)(struct rte_crypto_op *,
@@ -1173,7 +1172,7 @@ prepare_aead_xform(uint8_t dev_id,
 static int
 prepare_cipher_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1212,7 +1211,7 @@ prepare_cipher_op(struct rte_crypto_op *op,
 static int
 prepare_auth_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1251,7 +1250,7 @@ prepare_auth_op(struct rte_crypto_op *op,
 static int
 prepare_aead_op(struct rte_crypto_op *op,
 		struct rte_mbuf *mbuf,
-		struct rte_cryptodev_sym_session *session,
+		void *session,
 		uint32_t dir,
 		struct fips_dev_self_test_vector *vec)
 {
@@ -1464,7 +1463,7 @@ run_single_test(uint8_t dev_id,
 		uint32_t negative_test)
 {
 	struct rte_crypto_sym_xform xform;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	uint16_t n_deqd;
 	uint8_t key[256];
 	int ret;
@@ -1484,8 +1483,7 @@ run_single_test(uint8_t dev_id,
 	if (!sess)
 		return -ENOMEM;
 
-	ret = rte_cryptodev_sym_session_init(dev_id,
-			sess, &xform, env->sess_priv_pool);
+	ret = rte_cryptodev_sym_session_init(dev_id, sess, &xform);
 	if (ret < 0) {
 		RTE_LOG(ERR, PMD, "Error %i: Init session\n", ret);
 		return ret;
@@ -1533,8 +1531,6 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
 		rte_mempool_free(env->op_pool);
 	if (env->sess_pool)
 		rte_mempool_free(env->sess_pool);
-	if (env->sess_priv_pool)
-		rte_mempool_free(env->sess_priv_pool);
 
 	rte_cryptodev_stop(dev_id);
 }
@@ -1542,7 +1538,7 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
 static int
 fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
 {
-	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
+	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
 	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(dev_id);
 	struct rte_cryptodev_config conf;
 	char name[128];
@@ -1586,25 +1582,13 @@ fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
 	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_POOL", dev_id);
 
 	env->sess_pool = rte_cryptodev_sym_session_pool_create(name,
-			128, 0, 0, 0, rte_cryptodev_socket_id(dev_id));
+			128, sess_sz, 0, 0, rte_cryptodev_socket_id(dev_id));
 	if (!env->sess_pool) {
 		ret = -ENOMEM;
 		goto error_exit;
 	}
 
-	memset(name, 0, 128);
-	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_PRIV_POOL", dev_id);
-
-	env->sess_priv_pool = rte_mempool_create(name,
-			128, sess_sz, 0, 0, NULL, NULL, NULL,
-			NULL, rte_cryptodev_socket_id(dev_id), 0);
-	if (!env->sess_priv_pool) {
-		ret = -ENOMEM;
-		goto error_exit;
-	}
-
 	qp_conf.mp_session = env->sess_pool;
-	qp_conf.mp_session_private = env->sess_priv_pool;
 
 	ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
 			rte_cryptodev_socket_id(dev_id));
diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
index a8daad1f48..03c6ccb5b8 100644
--- a/examples/fips_validation/main.c
+++ b/examples/fips_validation/main.c
@@ -48,13 +48,12 @@ struct cryptodev_fips_validate_env {
 	uint16_t mbuf_data_room;
 	struct rte_mempool *mpool;
 	struct rte_mempool *sess_mpool;
-	struct rte_mempool *sess_priv_mpool;
 	struct rte_mempool *op_pool;
 	struct rte_mbuf *mbuf;
 	uint8_t *digest;
 	uint16_t digest_len;
 	struct rte_crypto_op *op;
-	struct rte_cryptodev_sym_session *sess;
+	void *sess;
 	uint16_t self_test;
 	struct fips_dev_broken_test_config *broken_test_config;
 } env;
@@ -63,7 +62,7 @@ static int
 cryptodev_fips_validate_app_int(void)
 {
 	struct rte_cryptodev_config conf = {rte_socket_id(), 1, 0};
-	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
+	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
 	struct rte_cryptodev_info dev_info;
 	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(
 			env.dev_id);
@@ -103,16 +102,11 @@ cryptodev_fips_validate_app_int(void)
 	ret = -ENOMEM;
 
 	env.sess_mpool = rte_cryptodev_sym_session_pool_create(
-			"FIPS_SESS_MEMPOOL", 16, 0, 0, 0, rte_socket_id());
+			"FIPS_SESS_MEMPOOL", 16, sess_sz, 0, 0,
+			rte_socket_id());
 	if (!env.sess_mpool)
 		goto error_exit;
 
-	env.sess_priv_mpool = rte_mempool_create("FIPS_SESS_PRIV_MEMPOOL",
-			16, sess_sz, 0, 0, NULL, NULL, NULL,
-			NULL, rte_socket_id(), 0);
-	if (!env.sess_priv_mpool)
-		goto error_exit;
-
 	env.op_pool = rte_crypto_op_pool_create(
 			"FIPS_OP_POOL",
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
@@ -127,7 +121,6 @@ cryptodev_fips_validate_app_int(void)
 		goto error_exit;
 
 	qp_conf.mp_session = env.sess_mpool;
-	qp_conf.mp_session_private = env.sess_priv_mpool;
 
 	ret = rte_cryptodev_queue_pair_setup(env.dev_id, 0, &qp_conf,
 			rte_socket_id());
@@ -141,8 +134,6 @@ cryptodev_fips_validate_app_int(void)
 	rte_mempool_free(env.mpool);
 	if (env.sess_mpool)
 		rte_mempool_free(env.sess_mpool);
-	if (env.sess_priv_mpool)
-		rte_mempool_free(env.sess_priv_mpool);
 	if (env.op_pool)
 		rte_mempool_free(env.op_pool);
 
@@ -158,7 +149,6 @@ cryptodev_fips_validate_app_uninit(void)
 	rte_cryptodev_sym_session_free(env.sess);
 	rte_mempool_free(env.mpool);
 	rte_mempool_free(env.sess_mpool);
-	rte_mempool_free(env.sess_priv_mpool);
 	rte_mempool_free(env.op_pool);
 }
 
@@ -1179,7 +1169,7 @@ fips_run_test(void)
 		return -ENOMEM;
 
 	ret = rte_cryptodev_sym_session_init(env.dev_id,
-			env.sess, &xform, env.sess_priv_mpool);
+			env.sess, &xform);
 	if (ret < 0) {
 		RTE_LOG(ERR, USER1, "Error %i: Init session\n",
 				ret);
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..65528ee2e7 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1216,15 +1216,11 @@ ipsec_poll_mode_worker(void)
 	qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
 	qconf->inbound.cdev_map = cdev_map_in;
 	qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
-	qconf->inbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
 	qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
 	qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
 	qconf->outbound.cdev_map = cdev_map_out;
 	qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
-	qconf->outbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
 	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
 
@@ -2142,8 +2138,6 @@ cryptodevs_init(uint16_t req_queue_num)
 		qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
 		qp_conf.mp_session =
 			socket_ctx[dev_conf.socket_id].session_pool;
-		qp_conf.mp_session_private =
-			socket_ctx[dev_conf.socket_id].session_priv_pool;
 		for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
 			if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
 					&qp_conf, dev_conf.socket_id))
@@ -2405,37 +2399,6 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
 		printf("Allocated session pool on socket %d\n",	socket_id);
 }
 
-static void
-session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
-	size_t sess_sz)
-{
-	char mp_name[RTE_MEMPOOL_NAMESIZE];
-	struct rte_mempool *sess_mp;
-	uint32_t nb_sess;
-
-	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-			"sess_mp_priv_%u", socket_id);
-	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
-		rte_lcore_count());
-	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
-			CDEV_MP_CACHE_MULTIPLIER);
-	sess_mp = rte_mempool_create(mp_name,
-			nb_sess,
-			sess_sz,
-			CDEV_MP_CACHE_SZ,
-			0, NULL, NULL, NULL,
-			NULL, socket_id,
-			0);
-	ctx->session_priv_pool = sess_mp;
-
-	if (ctx->session_priv_pool == NULL)
-		rte_exit(EXIT_FAILURE,
-			"Cannot init session priv pool on socket %d\n",
-			socket_id);
-	else
-		printf("Allocated session priv pool on socket %d\n",
-			socket_id);
-}
 
 static void
 pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
@@ -2938,8 +2901,6 @@ main(int32_t argc, char **argv)
 
 		pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool);
 		session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz);
-		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
-			sess_sz);
 	}
 	printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool);
 
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 03d907cba8..a5921de11c 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -143,8 +143,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
 		ips->crypto.ses = rte_cryptodev_sym_session_create(
 				ipsec_ctx->session_pool);
 		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
-				ips->crypto.ses, sa->xforms,
-				ipsec_ctx->session_priv_pool);
+				ips->crypto.ses, sa->xforms);
 
 		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
 				&cdev_info);
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 8405c48171..673c64e8dc 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -243,7 +243,6 @@ struct socket_ctx {
 	struct rte_mempool *mbuf_pool;
 	struct rte_mempool *mbuf_pool_indir;
 	struct rte_mempool *session_pool;
-	struct rte_mempool *session_priv_pool;
 };
 
 struct cnt_blk {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..04bcce49db 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -537,14 +537,10 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 	lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
 	lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in;
 	lconf.inbound.session_pool = socket_ctx[socket_id].session_pool;
-	lconf.inbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 	lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
 	lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
 	lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out;
 	lconf.outbound.session_pool = socket_ctx[socket_id].session_pool;
-	lconf.outbound.session_priv_pool =
-			socket_ctx[socket_id].session_priv_pool;
 
 	RTE_LOG(INFO, IPSEC,
 		"Launching event mode worker (non-burst - Tx internal port - "
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..bdc3da731c 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -188,7 +188,7 @@ struct l2fwd_crypto_params {
 	struct l2fwd_iv auth_iv;
 	struct l2fwd_iv aead_iv;
 	struct l2fwd_key aad;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	uint8_t do_cipher;
 	uint8_t do_hash;
@@ -229,7 +229,6 @@ struct rte_mempool *l2fwd_pktmbuf_pool;
 struct rte_mempool *l2fwd_crypto_op_pool;
 static struct {
 	struct rte_mempool *sess_mp;
-	struct rte_mempool *priv_mp;
 } session_pool_socket[RTE_MAX_NUMA_NODES];
 
 /* Per-port statistics struct */
@@ -671,11 +670,11 @@ generate_random_key(uint8_t *key, unsigned length)
 }
 
 /* Session is created and is later attached to the crypto operation. 8< */
-static struct rte_cryptodev_sym_session *
+static void *
 initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
 {
 	struct rte_crypto_sym_xform *first_xform;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int retval = rte_cryptodev_socket_id(cdev_id);
 
 	if (retval < 0)
@@ -703,8 +702,7 @@ initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
 		return NULL;
 
 	if (rte_cryptodev_sym_session_init(cdev_id, session,
-				first_xform,
-				session_pool_socket[socket_id].priv_mp) < 0)
+				first_xform) < 0)
 		return NULL;
 
 	return session;
@@ -730,7 +728,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
 			US_PER_S * BURST_TX_DRAIN_US;
 	struct l2fwd_crypto_params *cparams;
 	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	if (qconf->nb_rx_ports == 0) {
 		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
@@ -2388,30 +2386,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 		} else
 			sessions_needed = enabled_cdev_count;
 
-		if (session_pool_socket[socket_id].priv_mp == NULL) {
-			char mp_name[RTE_MEMPOOL_NAMESIZE];
-
-			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
-				"priv_sess_mp_%u", socket_id);
-
-			session_pool_socket[socket_id].priv_mp =
-					rte_mempool_create(mp_name,
-						sessions_needed,
-						max_sess_sz,
-						0, 0, NULL, NULL, NULL,
-						NULL, socket_id,
-						0);
-
-			if (session_pool_socket[socket_id].priv_mp == NULL) {
-				printf("Cannot create pool on socket %d\n",
-					socket_id);
-				return -ENOMEM;
-			}
-
-			printf("Allocated pool \"%s\" on socket %d\n",
-				mp_name, socket_id);
-		}
-
 		if (session_pool_socket[socket_id].sess_mp == NULL) {
 			char mp_name[RTE_MEMPOOL_NAMESIZE];
 			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
@@ -2421,7 +2395,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 					rte_cryptodev_sym_session_pool_create(
 							mp_name,
 							sessions_needed,
-							0, 0, 0, socket_id);
+							max_sess_sz,
+							0, 0, socket_id);
 
 			if (session_pool_socket[socket_id].sess_mp == NULL) {
 				printf("Cannot create pool on socket %d\n",
@@ -2573,8 +2548,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
 
 		qp_conf.nb_descriptors = 2048;
 		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
-		qp_conf.mp_session_private =
-				session_pool_socket[socket_id].priv_mp;
 
 		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
 				socket_id);
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index dea7dcbd07..cbb97aaf76 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -46,7 +46,6 @@ struct vhost_crypto_info {
 	int vids[MAX_NB_SOCKETS];
 	uint32_t nb_vids;
 	struct rte_mempool *sess_pool;
-	struct rte_mempool *sess_priv_pool;
 	struct rte_mempool *cop_pool;
 	uint8_t cid;
 	uint32_t qid;
@@ -304,7 +303,6 @@ new_device(int vid)
 	}
 
 	ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
-			info->sess_priv_pool,
 			rte_lcore_to_socket_id(options.los[i].lcore_id));
 	if (ret) {
 		RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
@@ -458,7 +456,6 @@ free_resource(void)
 
 		rte_mempool_free(info->cop_pool);
 		rte_mempool_free(info->sess_pool);
-		rte_mempool_free(info->sess_priv_pool);
 
 		for (j = 0; j < lo->nb_sockets; j++) {
 			rte_vhost_driver_unregister(lo->socket_files[i]);
@@ -544,16 +541,12 @@ main(int argc, char *argv[])
 
 		snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
 		info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
-				SESSION_MAP_ENTRIES, 0, 0, 0,
-				rte_lcore_to_socket_id(lo->lcore_id));
-
-		snprintf(name, 127, "SESS_POOL_PRIV_%u", lo->lcore_id);
-		info->sess_priv_pool = rte_mempool_create(name,
 				SESSION_MAP_ENTRIES,
 				rte_cryptodev_sym_get_private_session_size(
-				info->cid), 64, 0, NULL, NULL, NULL, NULL,
-				rte_lcore_to_socket_id(lo->lcore_id), 0);
-		if (!info->sess_priv_pool || !info->sess_pool) {
+					info->cid), 0, 0,
+				rte_lcore_to_socket_id(lo->lcore_id));
+
+		if (!info->sess_pool) {
 			RTE_LOG(ERR, USER1, "Failed to create mempool");
 			goto error_exit;
 		}
@@ -574,7 +567,6 @@ main(int argc, char *argv[])
 
 		qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
 		qp_conf.mp_session = info->sess_pool;
-		qp_conf.mp_session_private = info->sess_priv_pool;
 
 		for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
 			ret = rte_cryptodev_queue_pair_setup(info->cid, j,
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index ae3aac59ae..d758b3b85d 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -242,7 +242,6 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
  * @param	dev		Crypto device pointer
  * @param	xform		Single or chain of crypto xforms
  * @param	session		Pointer to cryptodev's private session structure
- * @param	mp		Mempool where the private session is allocated
  *
  * @return
  *  - Returns 0 if private session structure have been created successfully.
@@ -251,9 +250,7 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
  *  - Returns -ENOMEM if the private session could not be allocated.
  */
 typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
-		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *session,
-		struct rte_mempool *mp);
+		struct rte_crypto_sym_xform *xform, void *session);
 /**
  * Configure a Crypto asymmetric session on a device.
  *
@@ -279,7 +276,7 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
  * @param	sess		Cryptodev session structure
  */
 typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess);
+		void *sess);
 /**
  * Free asymmetric session private data.
  *
diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
index a864f5036f..200617f623 100644
--- a/lib/cryptodev/rte_crypto.h
+++ b/lib/cryptodev/rte_crypto.h
@@ -420,7 +420,7 @@ rte_crypto_op_sym_xforms_alloc(struct rte_crypto_op *op, uint8_t nb_xforms)
  */
 static inline int
 rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
 		return -1;
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 58c0724743..848da1942c 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -932,7 +932,7 @@ __rte_crypto_sym_op_sym_xforms_alloc(struct rte_crypto_sym_op *sym_op,
  */
 static inline int
 __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
-		struct rte_cryptodev_sym_session *sess)
+		void *sess)
 {
 	sym_op->session = sess;
 
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 9fa3aff1d3..a31cae202a 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -199,6 +199,8 @@ struct rte_cryptodev_sym_session_pool_private_data {
 	/**< number of elements in sess_data array */
 	uint16_t user_data_sz;
 	/**< session user data will be placed after sess_data */
+	uint16_t sess_priv_sz;
+	/**< Maximum private session data size which each driver can use */
 };
 
 int
@@ -1223,8 +1225,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 		return -EINVAL;
 	}
 
-	if ((qp_conf->mp_session && !qp_conf->mp_session_private) ||
-			(!qp_conf->mp_session && qp_conf->mp_session_private)) {
+	if (!qp_conf->mp_session) {
 		CDEV_LOG_ERR("Invalid mempools\n");
 		return -EINVAL;
 	}
@@ -1232,7 +1233,6 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 	if (qp_conf->mp_session) {
 		struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
 		uint32_t obj_size = qp_conf->mp_session->elt_size;
-		uint32_t obj_priv_size = qp_conf->mp_session_private->elt_size;
 		struct rte_cryptodev_sym_session s = {0};
 
 		pool_priv = rte_mempool_get_priv(qp_conf->mp_session);
@@ -1244,11 +1244,11 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
 
 		s.nb_drivers = pool_priv->nb_drivers;
 		s.user_data_sz = pool_priv->user_data_sz;
+		s.priv_sz = pool_priv->sess_priv_sz;
 
-		if ((rte_cryptodev_sym_get_existing_header_session_size(&s) >
-			obj_size) || (s.nb_drivers <= dev->driver_id) ||
-			rte_cryptodev_sym_get_private_session_size(dev_id) >
-				obj_priv_size) {
+		if (((rte_cryptodev_sym_get_existing_header_session_size(&s) +
+				(s.nb_drivers * s.priv_sz)) > obj_size) ||
+				(s.nb_drivers <= dev->driver_id)) {
 			CDEV_LOG_ERR("Invalid mempool\n");
 			return -EINVAL;
 		}
@@ -1710,11 +1710,11 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
 
 int
 rte_cryptodev_sym_session_init(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess,
-		struct rte_crypto_sym_xform *xforms,
-		struct rte_mempool *mp)
+		void *sess_opaque,
+		struct rte_crypto_sym_xform *xforms)
 {
 	struct rte_cryptodev *dev;
+	struct rte_cryptodev_sym_session *sess = sess_opaque;
 	uint32_t sess_priv_sz = rte_cryptodev_sym_get_private_session_size(
 			dev_id);
 	uint8_t index;
@@ -1727,10 +1727,10 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 
 	dev = rte_cryptodev_pmd_get_dev(dev_id);
 
-	if (sess == NULL || xforms == NULL || dev == NULL || mp == NULL)
+	if (sess == NULL || xforms == NULL || dev == NULL)
 		return -EINVAL;
 
-	if (mp->elt_size < sess_priv_sz)
+	if (sess->priv_sz < sess_priv_sz)
 		return -EINVAL;
 
 	index = dev->driver_id;
@@ -1740,8 +1740,11 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_configure, -ENOTSUP);
 
 	if (sess->sess_data[index].refcnt == 0) {
+		sess->sess_data[index].data = (void *)((uint8_t *)sess +
+				rte_cryptodev_sym_get_header_session_size() +
+				(index * sess->priv_sz));
 		ret = dev->dev_ops->sym_session_configure(dev, xforms,
-							sess, mp);
+				sess->sess_data[index].data);
 		if (ret < 0) {
 			CDEV_LOG_ERR(
 				"dev_id %d failed to configure session details",
@@ -1750,7 +1753,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
 		}
 	}
 
-	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms, mp);
+	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms);
 	sess->sess_data[index].refcnt++;
 	return 0;
 }
@@ -1795,6 +1798,21 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
 	rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
 	return 0;
 }
+static size_t
+get_max_sym_sess_priv_sz(void)
+{
+	size_t max_sz, sz;
+	int16_t cdev_id, n;
+
+	max_sz = 0;
+	n = rte_cryptodev_count();
+	for (cdev_id = 0; cdev_id != n; cdev_id++) {
+		sz = rte_cryptodev_sym_get_private_session_size(cdev_id);
+		if (sz > max_sz)
+			max_sz = sz;
+	}
+	return max_sz;
+}
 
 struct rte_mempool *
 rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
@@ -1804,15 +1822,15 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 	struct rte_mempool *mp;
 	struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
 	uint32_t obj_sz;
+	uint32_t sess_priv_sz = get_max_sym_sess_priv_sz();
 
 	obj_sz = rte_cryptodev_sym_get_header_session_size() + user_data_size;
-	if (obj_sz > elt_size)
+	if (elt_size < obj_sz + (sess_priv_sz * nb_drivers)) {
 		CDEV_LOG_INFO("elt_size %u is expanded to %u\n", elt_size,
-				obj_sz);
-	else
-		obj_sz = elt_size;
-
-	mp = rte_mempool_create(name, nb_elts, obj_sz, cache_size,
+				obj_sz + (sess_priv_sz * nb_drivers));
+		elt_size = obj_sz + (sess_priv_sz * nb_drivers);
+	}
+	mp = rte_mempool_create(name, nb_elts, elt_size, cache_size,
 			(uint32_t)(sizeof(*pool_priv)),
 			NULL, NULL, NULL, NULL,
 			socket_id, 0);
@@ -1832,6 +1850,7 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
 
 	pool_priv->nb_drivers = nb_drivers;
 	pool_priv->user_data_sz = user_data_size;
+	pool_priv->sess_priv_sz = sess_priv_sz;
 
 	rte_cryptodev_trace_sym_session_pool_create(name, nb_elts,
 		elt_size, cache_size, user_data_size, mp);
@@ -1865,7 +1884,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct rte_mempool *mp)
 	return 1;
 }
 
-struct rte_cryptodev_sym_session *
+void *
 rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 {
 	struct rte_cryptodev_sym_session *sess;
@@ -1886,6 +1905,7 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 
 	sess->nb_drivers = pool_priv->nb_drivers;
 	sess->user_data_sz = pool_priv->user_data_sz;
+	sess->priv_sz = pool_priv->sess_priv_sz;
 	sess->opaque_data = 0;
 
 	/* Clear device session pointer.
@@ -1895,7 +1915,7 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
 			rte_cryptodev_sym_session_data_size(sess));
 
 	rte_cryptodev_trace_sym_session_create(mp, sess);
-	return sess;
+	return (void *)sess;
 }
 
 struct rte_cryptodev_asym_session *
@@ -1933,9 +1953,9 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
 }
 
 int
-rte_cryptodev_sym_session_clear(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess)
+rte_cryptodev_sym_session_clear(uint8_t dev_id, void *s)
 {
+	struct rte_cryptodev_sym_session *sess = s;
 	struct rte_cryptodev *dev;
 	uint8_t driver_id;
 
@@ -1957,7 +1977,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_clear, -ENOTSUP);
 
-	dev->dev_ops->sym_session_clear(dev, sess);
+	dev->dev_ops->sym_session_clear(dev, sess->sess_data[driver_id].data);
 
 	rte_cryptodev_trace_sym_session_clear(dev_id, sess);
 	return 0;
@@ -1988,10 +2008,11 @@ rte_cryptodev_asym_session_clear(uint8_t dev_id,
 }
 
 int
-rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
+rte_cryptodev_sym_session_free(void *s)
 {
 	uint8_t i;
 	struct rte_mempool *sess_mp;
+	struct rte_cryptodev_sym_session *sess = s;
 
 	if (sess == NULL)
 		return -EINVAL;
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index bb01f0f195..25af6fa7b9 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -537,8 +537,6 @@ struct rte_cryptodev_qp_conf {
 	uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
 	struct rte_mempool *mp_session;
 	/**< The mempool for creating session in sessionless mode */
-	struct rte_mempool *mp_session_private;
-	/**< The mempool for creating sess private data in sessionless mode */
 };
 
 /**
@@ -1126,6 +1124,8 @@ struct rte_cryptodev_sym_session {
 	/**< number of elements in sess_data array */
 	uint16_t user_data_sz;
 	/**< session user data will be placed after sess_data */
+	uint16_t priv_sz;
+	/**< Maximum private session data size which each driver can use */
 	__extension__ struct {
 		void *data;
 		uint16_t refcnt;
@@ -1177,10 +1177,10 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
  * @param   mempool    Symmetric session mempool to allocate session
  *                     objects from
  * @return
- *  - On success return pointer to sym-session
+ *  - On success return opaque pointer to sym-session
  *  - On failure returns NULL
  */
-struct rte_cryptodev_sym_session *
+void *
 rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
 
 /**
@@ -1209,7 +1209,7 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
  *  - -EBUSY if not all device private data has been freed.
  */
 int
-rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
+rte_cryptodev_sym_session_free(void *sess);
 
 /**
  * Frees asymmetric crypto session header, after checking that all
@@ -1229,25 +1229,23 @@ rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
 
 /**
  * Fill out private data for the device id, based on its device type.
+ * Memory for the private data is already allocated in sess, the driver
+ * needs to fill in the content.
  *
  * @param   dev_id   ID of device that we want the session to be used on
  * @param   sess     Session where the private data will be attached to
  * @param   xforms   Symmetric crypto transform operations to apply on flow
  *                   processed with this session
- * @param   mempool  Mempool where the private data is allocated.
  *
  * @return
  *  - On success, zero.
  *  - -EINVAL if input parameters are invalid.
  *  - -ENOTSUP if crypto device does not support the crypto transform or
  *    does not support symmetric operations.
- *  - -ENOMEM if the private session could not be allocated.
  */
 int
-rte_cryptodev_sym_session_init(uint8_t dev_id,
-			struct rte_cryptodev_sym_session *sess,
-			struct rte_crypto_sym_xform *xforms,
-			struct rte_mempool *mempool);
+rte_cryptodev_sym_session_init(uint8_t dev_id, void *sess,
+			struct rte_crypto_sym_xform *xforms);
 
 /**
  * Initialize asymmetric session on a device with specific asymmetric xform
@@ -1286,8 +1284,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
  *  - -ENOTSUP if crypto device does not support symmetric operations.
  */
 int
-rte_cryptodev_sym_session_clear(uint8_t dev_id,
-			struct rte_cryptodev_sym_session *sess);
+rte_cryptodev_sym_session_clear(uint8_t dev_id, void *sess);
 
 /**
  * Frees resources held by asymmetric session during rte_cryptodev_session_init
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..44da04c425 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -56,7 +56,6 @@ RTE_TRACE_POINT(
 	rte_trace_point_emit_u16(queue_pair_id);
 	rte_trace_point_emit_u32(conf->nb_descriptors);
 	rte_trace_point_emit_ptr(conf->mp_session);
-	rte_trace_point_emit_ptr(conf->mp_session_private);
 )
 
 RTE_TRACE_POINT(
@@ -106,15 +105,13 @@ RTE_TRACE_POINT(
 RTE_TRACE_POINT(
 	rte_cryptodev_trace_sym_session_init,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id,
-		struct rte_cryptodev_sym_session *sess, void *xforms,
-		void *mempool),
+		struct rte_cryptodev_sym_session *sess, void *xforms),
 	rte_trace_point_emit_u8(dev_id);
 	rte_trace_point_emit_ptr(sess);
 	rte_trace_point_emit_u64(sess->opaque_data);
 	rte_trace_point_emit_u16(sess->nb_drivers);
 	rte_trace_point_emit_u16(sess->user_data_sz);
 	rte_trace_point_emit_ptr(xforms);
-	rte_trace_point_emit_ptr(mempool);
 )
 
 RTE_TRACE_POINT(
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..efdba9c899 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -1719,7 +1719,7 @@ struct sym_crypto_data {
 	uint16_t op_mask;
 
 	/** Session pointer. */
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 
 	/** Direction of crypto, encrypt or decrypt */
 	uint16_t direction;
@@ -1780,7 +1780,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
 	const struct rte_crypto_auth_xform *auth_xform = NULL;
 	const struct rte_crypto_aead_xform *aead_xform = NULL;
 	struct rte_crypto_sym_xform *xform = p->xform;
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int ret;
 
 	memset(data, 0, sizeof(*data));
@@ -1905,7 +1905,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
 		return -ENOMEM;
 
 	ret = rte_cryptodev_sym_session_init(cfg->cryptodev_id, session,
-			p->xform, cfg->mp_init);
+			p->xform);
 	if (ret < 0) {
 		rte_cryptodev_sym_session_free(session);
 		return ret;
@@ -2858,7 +2858,7 @@ rte_table_action_time_read(struct rte_table_action *action,
 	return 0;
 }
 
-struct rte_cryptodev_sym_session *
+void *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data)
 {
diff --git a/lib/pipeline/rte_table_action.h b/lib/pipeline/rte_table_action.h
index 82bc9d9ac9..68db453a8b 100644
--- a/lib/pipeline/rte_table_action.h
+++ b/lib/pipeline/rte_table_action.h
@@ -1129,7 +1129,7 @@ rte_table_action_time_read(struct rte_table_action *action,
  *   The pointer to the session on success, NULL otherwise.
  */
 __rte_experimental
-struct rte_cryptodev_sym_session *
+void *
 rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
 	void *data);
 
diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
index f54d731139..d9b7beed9c 100644
--- a/lib/vhost/rte_vhost_crypto.h
+++ b/lib/vhost/rte_vhost_crypto.h
@@ -50,8 +50,6 @@ rte_vhost_crypto_driver_start(const char *path);
  *  multiple Vhost-crypto devices.
  * @param sess_pool
  *  The pointer to the created cryptodev session pool.
- * @param sess_priv_pool
- *  The pointer to the created cryptodev session private data mempool.
  * @param socket_id
  *  NUMA Socket ID to allocate resources on. *
  * @return
@@ -61,7 +59,6 @@ rte_vhost_crypto_driver_start(const char *path);
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
-		struct rte_mempool *sess_priv_pool,
 		int socket_id);
 
 /**
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 926b5c0bd9..b4464c4253 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -338,7 +338,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
 		VhostUserCryptoSessionParam *sess_param)
 {
 	struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
-	struct rte_cryptodev_sym_session *session;
+	void *session;
 	int ret;
 
 	switch (sess_param->op_type) {
@@ -383,8 +383,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
 		return;
 	}
 
-	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1,
-			vcrypto->sess_priv_pool) < 0) {
+	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1) < 0) {
 		VC_LOG_ERR("Failed to initialize session");
 		sess_param->session_id = -VIRTIO_CRYPTO_ERR;
 		return;
@@ -1425,7 +1424,6 @@ rte_vhost_crypto_driver_start(const char *path)
 int
 rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 		struct rte_mempool *sess_pool,
-		struct rte_mempool *sess_priv_pool,
 		int socket_id)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -1447,7 +1445,6 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
 	}
 
 	vcrypto->sess_pool = sess_pool;
-	vcrypto->sess_priv_pool = sess_priv_pool;
 	vcrypto->cid = cryptodev_id;
 	vcrypto->cache_session_id = UINT64_MAX;
 	vcrypto->last_session_id = 1;
-- 
2.25.1


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH] net/softnic: remove experimental table from API
  @ 2021-09-30 15:42  3%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-09-30 15:42 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Kinsella, Ray, Jasvinder Singh, dev, Cristian Dumitrescu

On Wed, Sep 15, 2021 at 9:33 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 9/1/2021 2:48 PM, Kinsella, Ray wrote:
> >
> >
> > On 01/09/2021 13:20, Jasvinder Singh wrote:
> >> This API was introduced in 18.08, therefore removing
> >> experimental tag to promote it to stable state.
> >>
> >> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> >> ---
> >>  drivers/net/softnic/rte_eth_softnic.h | 1 -
> >>  drivers/net/softnic/version.map       | 7 +------
> >>  2 files changed, 1 insertion(+), 7 deletions(-)
> >>
> > Acked-by: Ray Kinsella <mdr@ashroe.eu>
> >
>
> Applied to dpdk-next-net/main, thanks.


Nit:

diff --git a/drivers/net/softnic/version.map b/drivers/net/softnic/version.map
index cd5afcf155..01e1514276 100644
--- a/drivers/net/softnic/version.map
+++ b/drivers/net/softnic/version.map
@@ -1,8 +1,8 @@
 DPDK_22 {
        global:

-       rte_pmd_softnic_run;
        rte_pmd_softnic_manage;
+       rte_pmd_softnic_run;

        local: *;
 };


When merging patches touching .map, I recommend running:
./devtools/update-abi.sh 22.0
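
As a side note for reviewers, an illustrative one-liner (not a DPDK tool; the temp path and file contents below are made up for the example) can flag unsorted symbol lists in a version.map before committing:

```shell
# Create a sample version.map (illustrative content only).
cat > /tmp/version.map <<'EOF'
DPDK_22 {
	global:

	rte_pmd_softnic_manage;
	rte_pmd_softnic_run;

	local: *;
};
EOF

# Extract the exported symbol lines and check they are already sorted;
# 'sort -c' exits non-zero (and stays silent on success) if not.
grep -E '^[[:space:]]+rte_' /tmp/version.map | sort -c && echo "sorted"
```

`./devtools/update-abi.sh` remains the authoritative way to regenerate the maps; the snippet above is only a quick sanity check.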


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] DPDK Release Status Meeting 2021-09-23
@ 2021-09-30 19:38  4% Mcnamara, John
  0 siblings, 0 replies; 200+ results
From: Mcnamara, John @ 2021-09-30 19:38 UTC (permalink / raw)
  To: dev; +Cc: thomas, Yigit, Ferruh

Release status meeting minutes 2021-09-23
=========================================

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* ARM
* Debian
* Intel
* Marvell
* Nvidia
* Red Hat


Release Dates
-------------

* v21.11 dates
  * Proposal/V1:    Friday, 10 September (complete)
  * -rc1:           Friday, 15 October
  * Release:        Friday, 19 November


Subtrees
--------

* next-net

  * Merged from subtrees 2021-09-22
  * Issue with Broadcom tree and older compilers
  * Reviewing Enetfec NXP driver
  * Many ethdev patches - concerning due to the number of patches and the conflicts between them.

* next-net-intel

  - Merged base codes updates
  - More coming
  - Waiting for fix on ICE driver. CI issue with RSS

* next-net-mlx

  - Some patches merged and some merges ongoing

* next-net-brcm

  * Issue with GCC7 - under investigation

* next-net-mrvl

  - Some patches merged and some ongoing

* next-eventdev

  * 2 series from Intel - In good shape for merge
  * Changes on API hiding (for ABI stability)

* next-virtio

  * More reviews ongoing. ~6 ready for merge
  * Reviewing virtio RSS feature

* next-crypto

  * Patches for test framework
  * ABI changes reviewed
  * Crypto and security session will be sent in the next few days

* main

  * API updates for interrupts

    - Call to driver maintainers to look at it
    - https://patchwork.dpdk.org/project/dpdk/list/?series=18664

  * Pipelines patches to review

  * Issue with KNI build on SuSE

  * New Slack channel: https://join.slack.com/t/dpdkproject/shared_invite/zt-v6c9ef5z-FqgOAS7BDAYqev_a~pbXdw




LTS
---

* v19.11 (next version is v19.11.10)
  * 20.11.3 released on Monday Sept 6

  * 19.11.10 released on Monday Sept 6



* Distros
  * v20.11 in Debian 11

  * v20.11 in Ubuntu 21.04




Defects
-------

* Bugzilla links, 'Bugs',  added for hosted projects
  - https://www.dpdk.org/hosted-projects/


Opens
-----

* Release of OpenSSL 3.0 will cause some compilation issues in cryptodev: https://www.openssl.org/blog/blog/2021/09/07/OpenSSL3.Final/


DPDK Release Status Meetings
----------------------------

The DPDK Release Status Meeting is intended for DPDK Committers to discuss the status of the master tree and sub-trees, and for project managers to track progress or milestone dates.

The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK

If you wish to attend, just send an email to John McNamara <john.mcnamara@intel.com> for the invite.

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] DPDK Release Status Meeting 2021-09-30
@ 2021-09-30 19:59  3% Mcnamara, John
  0 siblings, 0 replies; 200+ results
From: Mcnamara, John @ 2021-09-30 19:59 UTC (permalink / raw)
  To: dev; +Cc: thomas, Yigit, Ferruh

Release status meeting minutes 2021-09-30
=========================================

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* Broadcom
* Debian
* Intel
* Marvell
* Nvidia
* Red Hat


Release Dates
-------------

* v21.11 dates
  * Proposal/V1:    Friday, 10 September (complete)
  * -rc1:           Friday, 15 October
  * Release:        Friday, 19 November


Subtrees
--------

* next-net

  * Reviewing and merging ongoing
  * Pulled sub-trees and rebased to main
  * Will merge other subtrees today
  * Patch from ARM to fix descriptor order. However, there is a 20% decrease in performance on ThunderX. Other ARM vendors should check.
  * Release queue patch -  awaiting response

* next-net-intel

  - Waiting for base code update
  - Tests need to be enabled in the CI

* next-net-mlx

  - 3 patches merged and some merges ongoing
  - Big regex patchset will be merged directly to main

* next-net-brcm

  * Issue with GCC7 resolved

* next-net-mrvl

  - A lot of patches to merge
  - 2 big patches for inline-ipsec and ingress meter
  - https://patches.dpdk.org/project/dpdk/list/?series=18917&state=*

* next-eventdev

  * Intel patches merged (except one)
  * Changes on API hiding (for ABI stability) reviewed and look good.

* next-virtio

  * Still quite a few series under review
  * Fix for regression that could be merged: net/virtio: revert forcing IOVA as VA mode for virtio-user

* next-crypto

  * Progressing.
  * A few patches that can be merged
  * Issue with Windows CI failure

* main

  * **NOTE**: there are several large patchsets on main that need detailed
    reviews from the community, such as the series for Interrupts
    and for the Thread API.
  * Merging experimental API updates ongoing
  * API updates for interrupts
    * Call to driver maintainers  to look at it:
    * https://patchwork.dpdk.org/project/dpdk/list/?series=18664
  * Some ASAN patches under review but need rework
  * Thread API series needs more reviews
    * https://patchwork.dpdk.org/project/dpdk/list/?series=18338
  * DMA device under review
  * Need to fix issue with MBUF segments
    * Should be merged this release and needs review
    * https://patches.dpdk.org/project/dpdk/patch/20210929213707.17727-1-olivier.matz@6wind.com/
  * Patches under test for automatic patch delegation in the CI
  * Issue with KNI build on SuSE
  * New Slack channel: https://join.slack.com/t/dpdkproject/shared_invite/zt-v6c9ef5z-FqgOAS7BDAYqev_a~pbXdw




LTS
---

* v19.11 (next version is v19.11.10)
  * 20.11.3 released on Monday Sept 6

  * 19.11.10 released on Monday Sept 6



* Distros
  * v20.11 in Debian 11

  * v20.11 in Ubuntu 21.04




Defects
-------

* Bugzilla links, 'Bugs',  added for hosted projects
  - https://www.dpdk.org/hosted-projects/


Opens
-----

* Release of OpenSSL 3.0 will cause some compilation issues in cryptodev: https://www.openssl.org/blog/blog/2021/09/07/OpenSSL3.Final/


DPDK Release Status Meetings
----------------------------

The DPDK Release Status Meeting is intended for DPDK Committers to discuss the status of the master tree and sub-trees, and for project managers to track progress or milestone dates.

The meeting occurs every Thursday at 8:30 UTC on https://meet.jit.si/DPDK

If you wish to attend, just send an email to John McNamara <john.mcnamara@intel.com> for the invite.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
  2021-09-20 20:11  0%   ` Narcisa Ana Maria Vasile
@ 2021-09-30 22:16  0%     ` William Tu
  2021-10-01  7:27  0%       ` David Marchand
  2021-10-01 10:34  0%     ` Thomas Monjalon
  1 sibling, 1 reply; 200+ results
From: William Tu @ 2021-09-30 22:16 UTC (permalink / raw)
  To: Narcisa Ana Maria Vasile, Thomas Monjalon
  Cc: dpdk-dev, Dmitry Kozliuk, Nick Connolly, Omar Cardona, Pallavi Kadam

On Mon, Sep 20, 2021 at 1:11 PM Narcisa Ana Maria Vasile
<navasile@linux.microsoft.com> wrote:
>
> On Tue, Aug 24, 2021 at 04:21:03PM +0000, William Tu wrote:
> > Currently there are some public headers that include 'sys/queue.h', which
> > is not POSIX, but usually provided by the Linux/BSD system library.
> > (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> > The file is missing on Windows. During the Windows build, DPDK uses a
> > bundled copy, so building a DPDK library works fine.  But when OVS or other
> > applications use DPDK as a library, because some DPDK public headers
> > include 'sys/queue.h', on Windows, it triggers an error due to no such
> > file.
> >
> > One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
> > Windows environment, such as [1]. However, this means DPDK exports the
> > functionalities of 'sys/queue.h' into the environment, which might cause
> > symbols, macros, headers clashing with other applications.
> >
> > The patch fixes it by removing the "#include <sys/queue.h>" from
> > DPDK public headers, so programs including DPDK headers don't depend
> > on the system to provide 'sys/queue.h'. When these public headers use
> > macros such as TAILQ_xxx, we replace it by the ones with RTE_ prefix.
> > For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> > in Windows EAL. Note that these RTE_ macros are compatible with
> > <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
> > macros in C files) and ABI (to avoid breaking it).
> >
> > Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> > the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> >
> > [1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
> >
> > Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> > Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > Signed-off-by: William Tu <u9012063@gmail.com>
> > ---
> Acked-by: Narcisa Vasile <navasile@linux.microsoft.com>
Hi Thomas,
Ping to see if the patch can be applied?
Thanks
William

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
  2021-09-30 22:16  0%     ` William Tu
@ 2021-10-01  7:27  0%       ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-01  7:27 UTC (permalink / raw)
  To: William Tu
  Cc: Narcisa Ana Maria Vasile, Thomas Monjalon, dpdk-dev,
	Dmitry Kozliuk, Nick Connolly, Omar Cardona, Pallavi Kadam

Hello William,

On Fri, Oct 1, 2021 at 12:17 AM William Tu <u9012063@gmail.com> wrote:
>
> On Mon, Sep 20, 2021 at 1:11 PM Narcisa Ana Maria Vasile
> <navasile@linux.microsoft.com> wrote:
> >
> > On Tue, Aug 24, 2021 at 04:21:03PM +0000, William Tu wrote:
> > > Currently there are some public headers that include 'sys/queue.h', which
> > > is not POSIX, but usually provided by the Linux/BSD system library.
> > > (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> > > The file is missing on Windows. During the Windows build, DPDK uses a
> > > bundled copy, so building a DPDK library works fine.  But when OVS or other
> > > applications use DPDK as a library, because some DPDK public headers
> > > include 'sys/queue.h', on Windows, it triggers an error due to no such
> > > file.
> > >
> > > One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
> > > Windows environment, such as [1]. However, this means DPDK exports the
> > > functionalities of 'sys/queue.h' into the environment, which might cause
> > > symbols, macros, headers clashing with other applications.
> > >
> > > The patch fixes it by removing the "#include <sys/queue.h>" from
> > > DPDK public headers, so programs including DPDK headers don't depend
> > > on the system to provide 'sys/queue.h'. When these public headers use
> > > macros such as TAILQ_xxx, we replace it by the ones with RTE_ prefix.
> > > For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> > > in Windows EAL. Note that these RTE_ macros are compatible with
> > > <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
> > > macros in C files) and ABI (to avoid breaking it).
> > >
> > > Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> > > the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> > >
> > > [1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
> > >
> > > Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> > > Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > > Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > > Signed-off-by: William Tu <u9012063@gmail.com>
> > > ---
> > Acked-by: Narcisa Vasile <navasile@linux.microsoft.com>
> Hi Thomas,
> Ping to see if the patch can be applied?

$ grep -rE '\<(|S)(CIRCLEQ|LIST|SIMPLEQ|TAILQ)_' build/install/include/
build/install/include/rte_tailq.h: *   first parameter passed to
TAILQ_HEAD macro)
build/install/include/rte_crypto_sym.h: * - LIST_END should not be
added to this enum
build/install/include/rte_crypto_sym.h: * - LIST_END should not be
added to this enum
build/install/include/rte_crypto_sym.h: * - LIST_END should not be
added to this enum
build/install/include/rte_os.h:#define RTE_TAILQ_HEAD(name, type)
TAILQ_HEAD(name, type)
build/install/include/rte_os.h:#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
build/install/include/rte_os.h:#define RTE_TAILQ_FOREACH(var, head,
field) TAILQ_FOREACH(var, head, field)
build/install/include/rte_os.h:#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
build/install/include/rte_os.h:#define RTE_TAILQ_NEXT(elem, field)
TAILQ_NEXT(elem, field)
build/install/include/rte_os.h:#define RTE_STAILQ_HEAD(name, type)
STAILQ_HEAD(name, type)
build/install/include/rte_os.h:#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)

LGTM.

I just have a concern that headers get broken again if we have no check.
Could buildtools/chkincs do the job (if we make this check work on Windows)?


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
  2021-09-20 20:11  0%   ` Narcisa Ana Maria Vasile
  2021-09-30 22:16  0%     ` William Tu
@ 2021-10-01 10:34  0%     ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-01 10:34 UTC (permalink / raw)
  To: William Tu
  Cc: dev, Dmitry.Kozliuk, Nick Connolly, Omar Cardona, Pallavi Kadam,
	Narcisa Ana Maria Vasile, david.marchand

20/09/2021 22:11, Narcisa Ana Maria Vasile:
> On Tue, Aug 24, 2021 at 04:21:03PM +0000, William Tu wrote:
> > Currently there are some public headers that include 'sys/queue.h', which
> > is not POSIX, but usually provided by the Linux/BSD system library.
> > (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> > The file is missing on Windows. During the Windows build, DPDK uses a
> > bundled copy, so building a DPDK library works fine.  But when OVS or other
> > applications use DPDK as a library, because some DPDK public headers
> > include 'sys/queue.h', on Windows, it triggers an error due to no such
> > file.
> > 
> > One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
> > Windows environment, such as [1]. However, this means DPDK exports the
> > functionalities of 'sys/queue.h' into the environment, which might cause
> > symbols, macros, headers clashing with other applications.
> > 
> > The patch fixes it by removing the "#include <sys/queue.h>" from
> > DPDK public headers, so programs including DPDK headers don't depend
> > on the system to provide 'sys/queue.h'. When these public headers use
> > macros such as TAILQ_xxx, we replace it by the ones with RTE_ prefix.
> > For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> > in Windows EAL. Note that these RTE_ macros are compatible with
> > <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
> > macros in C files) and ABI (to avoid breaking it).
> > 
> > Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> > the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> > 
> > [1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
> > 
> > Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> > Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> > Signed-off-by: William Tu <u9012063@gmail.com>
> > ---
> Acked-by: Narcisa Vasile <navasile@linux.microsoft.com>

Applied, thanks.

Note: I had to add a sys/queue.h include in iavf which is now enabled for Windows.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
    2021-09-29 11:13  0%   ` Singh, Aman Deep
@ 2021-10-01 11:39  0%   ` Andrew Rybchenko
  2021-10-08  8:39  0%     ` Xueming(Steven) Li
  1 sibling, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-10-01 11:39 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Qiming Yang, Qi Zhang, Matan Azrad, Viacheslav Ovsiienko,
	Xueming Li
  Cc: dev, Viacheslav Galaktionov, Haiyue Wang, Beilei Xing,
	Ferruh Yigit, Thomas Monjalon

Hello PMD maintainers,

please, review the patch.

It is especially important for net/mlx5 since changes there are
not trivial.

Thanks,
Andrew.

On 9/13/21 2:26 PM, Andrew Rybchenko wrote:
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> 
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
> 
> To this end, extend the rte_eth_dev_data structure to include the port ID
> of the backing device for representors.
> 
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> Acked-by: Beilei Xing <beilei.xing@intel.com>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI
> check is disabled for the structure because of the libabigail bug [1].
> It should not be a problem anyway since 21.11 is a ABI breaking release.
> 
> Potentially it is bad for out-of-tree drivers which implement
> representors but do not fill in a new parert_port_id field in
> rte_eth_dev_data structure. Get ID by name will not work.
> 
> mlx5 changes should be reviwed by maintainers very carefully, since
> we are not sure if we patch it correctly.
> 
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
> v5:
>     - try to improve name: backer_port_id instead of parent_port_id
>     - init new field to RTE_MAX_ETHPORTS on allocation to avoid
>       zero port usage by default
> 
> v4:
>     - apply mlx5 review notes: remove fallback from generic ethdev
>       code and add fallback to mlx5 code to handle legacy usecase
> 
> v3:
>     - fix mlx5 build breakage
> 
> v2:
>     - fix mlx5 review notes
>     - try device port ID first before parent in order to address
>       backward compatibility issue
> 
>  drivers/net/bnxt/bnxt_reps.c             |  1 +
>  drivers/net/enic/enic_vf_representor.c   |  1 +
>  drivers/net/i40e/i40e_vf_representor.c   |  1 +
>  drivers/net/ice/ice_dcf_vf_representor.c |  1 +
>  drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>  drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
>  drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
>  lib/ethdev/ethdev_driver.h               |  6 +++---
>  lib/ethdev/rte_class_eth.c               |  2 +-
>  lib/ethdev/rte_ethdev.c                  |  9 +++++----
>  lib/ethdev/rte_ethdev_core.h             |  6 ++++++
>  11 files changed, 46 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
> index bdbad53b7d..0d50c0f1da 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = rep_params->vf_id;
> +	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
>  
>  	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
>  	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
> diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..fedb09ecd6 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = vf->vf_id;
> +	eth_dev->data->backer_port_id = pf->port_id;
>  	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
>  		sizeof(struct rte_ether_addr) *
>  		ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..d65b821a01 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->backer_port_id = pf->dev_data->port_id;
>  
>  	/* Setting the number queues allocated to the VF */
>  	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
> diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..e51d0aa6b9 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
>  
>  	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> +	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
>  
>  	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
>  
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..9fa75984fb 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>  
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
>  
>  	/* Set representor device ops */
>  	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> index 470b16cb9a..1cddaaba1a 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->backer_port_id = port_id;
> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS)
> +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
>  	}
>  	priv->mp_id.port_id = eth_dev->data->port_id;
>  	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
> diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> index 26fa927039..a9c244c7dc 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->backer_port_id = port_id;
> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS)
> +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
>  	}
>  	/*
>  	 * Store associated network device interface index. This index
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 40e474aa7e..b940e6cb38 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>   * For backward compatibility, if no representor info, direct
>   * map legacy VF (no controller and pf).
>   *
> - * @param ethdev
> - *  Handle of ethdev port.
> + * @param port_id
> + *  Port ID of the backing device.
>   * @param type
>   *  Representor type.
>   * @param controller
> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>   */
>  __rte_internal
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> index 1fe5fa1f36..eda216ced5 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
>  		c = i / (np * nf);
>  		p = (i / nf) % np;
>  		f = i % nf;
> -		if (rte_eth_representor_id_get(edev,
> +		if (rte_eth_representor_id_get(edev->data->backer_port_id,
>  			eth_da.type,
>  			eth_da.nb_mh_controllers == 0 ? -1 :
>  					eth_da.mh_controllers[c],
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca9242..7c9b0d6b3b 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
>  	eth_dev = eth_dev_get(port_id);
>  	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
>  	eth_dev->data->port_id = port_id;
> +	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
>  	eth_dev->data->mtu = RTE_ETHER_MTU;
>  	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
>  
> @@ -5996,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
>  }
>  
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id)
> @@ -6012,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  		return -EINVAL;
>  
>  	/* Get PMD representor range info. */
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> +	ret = rte_eth_representor_info_get(port_id, NULL);
>  	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
>  	    controller == -1 && pf == -1) {
>  		/* Direct mapping for legacy VF representor. */
> @@ -6027,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  	if (info == NULL)
>  		return -ENOMEM;
>  	info->nb_ranges_alloc = n;
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> +	ret = rte_eth_representor_info_get(port_id, info);
>  	if (ret < 0)
>  		goto out;
>  
> @@ -6046,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  			continue;
>  		if (info->ranges[i].id_end < info->ranges[i].id_base) {
>  			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> -				ethdev->data->port_id, info->ranges[i].id_base,
> +				port_id, info->ranges[i].id_base,
>  				info->ranges[i].id_end, i);
>  			continue;
>  
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index edf96de2dc..48b814e8a1 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,12 @@ struct rte_eth_dev_data {
>  			/**< Switch-specific identifier.
>  			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>  			 */
> +	uint16_t backer_port_id;
> +			/**< Port ID of the backing device.
> +			 *   This device will be used to query representor
> +			 *   info and calculate representor IDs.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
>  
>  	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures
  2021-09-22 14:09  4% ` [dpdk-dev] [RFC v2 0/5] " Konstantin Ananyev
                     ` (2 preceding siblings ...)
  2021-09-22 14:09  2%   ` [dpdk-dev] [RFC v2 4/5] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-01 14:02  4%   ` Konstantin Ananyev
  2021-10-01 14:02  6%     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                       ` (5 more replies)
  3 siblings, 6 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do a similar thing here, so decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from function pointers itself, each element of this
   flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes inside PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user apps are required) and PMD developers
   (no changes in PMDs are required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause some slowdown for
   code paths where RX/TX callbacks are heavily involved.
   Hopefully such a trade-off is acceptable to the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
 
Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

Would like to thank Ferruh and Jerin for reviewing and testing previous
versions of this series. All other interested parties, please don't be shy
and provide your feedback.

Konstantin Ananyev (7):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'fast' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: remove legacy Rx descriptor done API
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_ethdev_vf.c             |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 149 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  79 +++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |   6 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 67 files changed, 661 insertions(+), 568 deletions(-)

-- 
2.26.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
  2021-10-01 14:02  6%     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
  2021-10-01 14:02  2%     ` [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-01 14:02  2%     ` Konstantin Ananyev
  2021-10-01 16:46  3%       ` Ferruh Yigit
  2021-10-01 14:02  9%     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                       ` (2 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework 'fast' burst functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note - RX/TX callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 244 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 212 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..27d29b2ac6 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9642b7c00f..20ac5ba88d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,34 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+__rte_experimental
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5023,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5061,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5087,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5133,21 +5180,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5194,23 +5250,55 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entry PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+__rte_experimental
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5281,20 +5369,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5302,21 +5404,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5379,31 +5476,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..3a1278b448 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -247,11 +247,16 @@ EXPERIMENTAL {
 	rte_mtr_meter_policy_delete;
 	rte_mtr_meter_policy_update;
 	rte_mtr_meter_policy_validate;
+
+	# added in 21.11
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
 };
 
 INTERNAL {
 	global:
 
+	rte_eth_fp_ops;
 	rte_eth_dev_allocate;
 	rte_eth_dev_allocated;
 	rte_eth_dev_attach_secondary;
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
@ 2021-10-01 14:02  6%     ` Konstantin Ananyev
  2021-10-01 14:02  2%     ` [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it would be
plausible to unify the parameter lists of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the ethdev public inline
function rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..5e73c19e07 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -205,6 +205,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` were changed.
+  Instead of a pointer to ``rte_eth_dev`` and a queue index, it now accepts a
+  pointer to internal queue data as input. While this change is transparent to
+  the user, it still counts as an ABI change, as ``eth_rx_queue_count_t`` is
+  used by the public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..9642b7c00f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..00f27c643a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply	[relevance 6%]
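[Editor's note] The patch above converts every PMD's Rx queue count callback from a `(struct rte_eth_dev *dev, uint16_t rx_queue_id)` pair to a single queue pointer. A minimal, self-contained sketch of the new-style callback follows; the `demo_*` names are illustrative only and not taken from any real PMD:

```c
/* Hedged sketch (names are illustrative, not a real PMD): after this
 * patch the driver callback receives the queue pointer directly and
 * no longer dereferences dev->data->rx_queues[rx_queue_id] itself. */
#include <assert.h>
#include <stdint.h>

struct demo_rxq {
	uint16_t rx_tail;     /* next descriptor the driver would check */
	uint16_t nb_rx_desc;  /* ring size */
};

/* new-style callback, matching eth_rx_queue_count_t after this patch */
static uint32_t
demo_rx_queue_count(void *rx_queue)
{
	struct demo_rxq *rxq = rx_queue;

	/* a real PMD scans descriptors for the DD bit; this stub just
	 * reports the untraversed part of the ring */
	return (uint32_t)(rxq->nb_rx_desc - rxq->rx_tail);
}
```

The ethdev layer now resolves the queue once (`dev->data->rx_queues[queue_id]`) in `rte_eth_rx_queue_count()` and hands the pointer to the driver, which is why each PMD hunk above collapses to a plain `rxq = rx_queue;` assignment.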

* [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
                       ` (2 preceding siblings ...)
  2021-10-01 14:02  2%     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-01 14:02  9%     ` Konstantin Ananyev
  2021-10-01 17:02  0%     ` [dpdk-dev] [PATCH v3 0/7] " Ferruh Yigit
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
Few minor changes to keep DPDK building after that.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 11 files changed, 163 insertions(+), 150 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 532be57ae9..6aabcbcb26 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -220,6 +220,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can no longer be accessed
+  directly by users. While this is an ABI breakage, the change is intended
+  to be transparent for both users (no changes in user apps required) and
+  PMD developers (no changes in PMDs required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 140350314d..691985b15a 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
-- 
2.26.3


^ permalink raw reply	[relevance 9%]

* [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
  2021-10-01 14:02  6%     ` [dpdk-dev] [PATCH v3 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-01 14:02  2%     ` Konstantin Ananyev
  2021-10-01 14:02  2%     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-01 14:02 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should make possible future changes to the core eth_dev structures
transparent to the user and help avoid ABI/API breakages.
The plan is to keep a minimal part of the rte_eth_dev data public,
so we can still use inline functions for 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++
 4 files changed, 121 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..40333e7651 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'fast' API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth 'fast' API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..9fbb1bc3db 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast' API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4577,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 00f27c643a..1335ef8bb2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev RX/TX
+ * queue data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_fp_ops {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3
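
For illustration, here is a minimal, self-contained model of the dispatch
scheme this patch introduces: a flat per-port ops array indexed by port_id,
reset to dummy callbacks for unconfigured ports, with the public inline burst
helper touching only that array. All names and sizes below are simplified
stand-ins, not the DPDK implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS 4

struct mbuf;	/* opaque packet buffer, stands in for rte_mbuf */

typedef uint16_t (*rx_burst_t)(void *rxq, struct mbuf **pkts, uint16_t n);

/* flat, per-port 'fast' ops entry, analogous to struct rte_eth_fp_ops */
struct fp_ops {
	rx_burst_t rx_pkt_burst;
	void **rxq_data;	/* per-queue private data pointers */
};

static struct fp_ops fp_ops[MAX_PORTS];

/* dummy burst installed for unconfigured ports, as in eth_dev_fp_ops_reset() */
static uint16_t
dummy_rx_burst(void *rxq, struct mbuf **pkts, uint16_t n)
{
	(void)rxq; (void)pkts; (void)n;
	return 0;	/* nothing received; the real code also sets rte_errno */
}

/* stand-in for a PMD burst function that "receives" every requested slot */
static uint16_t
fake_pmd_rx_burst(void *rxq, struct mbuf **pkts, uint16_t n)
{
	(void)rxq; (void)pkts;
	return n;
}

static void
fp_ops_reset(struct fp_ops *ops)
{
	static void *dummy_data[1];

	ops->rx_pkt_burst = dummy_rx_burst;
	ops->rxq_data = dummy_data;
}

/* the public inline wrapper never dereferences the (now internal) dev struct */
static inline uint16_t
eth_rx_burst(uint16_t port_id, uint16_t queue_id,
	     struct mbuf **pkts, uint16_t n)
{
	struct fp_ops *ops = &fp_ops[port_id];

	return ops->rx_pkt_burst(ops->rxq_data[queue_id], pkts, n);
}
```

A started port would have its PMD function and queue array copied in (the
role of eth_dev_fp_ops_setup() above); a stopped port is pointed back at the
dummies, so stale inline callers fail safely instead of crashing.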


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH 3/3] cryptodev: rework session framework
  2021-09-30 14:50  1% ` [dpdk-dev] [PATCH 3/3] cryptodev: " Akhil Goyal
@ 2021-10-01 15:53  0%   ` Zhang, Roy Fan
  2021-10-04 19:07  0%     ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Zhang, Roy Fan @ 2021-10-01 15:53 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
	jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
	Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
	Ciara

Hi Akhil,

Your patch failed all QAT tests - it will likely also fail on any PMD that needs to know the session's physical address (such as nitrox and caam).

The reason is that the QAT PMD cannot learn the physical address of the session private data passed into it: rte_cryptodev_sym_session_init() computes the private data pointer as a driver-type-specific offset from the start of the session (which carries a mempool object header holding a valid physical address). The private data pointer - although a valid buffer - has no object header of its own from which a physical address could be read.

I think the solution could be: instead of passing the session private data, pass the rte_cryptodev_sym_session pointer and provide a macro that lets each driver compute the offset itself and locate its own private data within the session.

Regards,
Fan
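
P.S. A rough illustration of that suggestion (all struct, field and macro
names below are hypothetical, not the cryptodev API): the driver receives the
full session, computes its private-data offset with a macro, and can then add
that offset to the session's known physical address:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical session layout: a small header followed by one fixed-size
 * private data block per driver. Field names are illustrative only.
 */
struct sym_session {
	uint64_t opaque_data;
	uint16_t nb_drivers;
	uint16_t sess_data_sz;		/* per-driver private data size */
	uint8_t driver_priv_data[];	/* nb_drivers blocks follow */
};

/* locate private data of driver 'drv_id' inside the session blob */
#define CRYPTODEV_SESS_PRIV(sess, drv_id) \
	((void *)((sess)->driver_priv_data + \
		  (size_t)(drv_id) * (sess)->sess_data_sz))

/*
 * Byte offset of a driver's private data from the session start; with the
 * session's physical address known, the PMD can use phys(sess) + offset.
 */
static size_t
sess_priv_offset(struct sym_session *sess, uint16_t drv_id)
{
	return (size_t)((uint8_t *)CRYPTODEV_SESS_PRIV(sess, drv_id) -
			(uint8_t *)sess);
}
```

That keeps the physical-address derivation inside the driver while still
hiding the session layout from applications.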

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Thursday, September 30, 2021 3:50 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH 3/3] cryptodev: rework session framework
> 
> As per current design, rte_cryptodev_sym_session_create() and
> rte_cryptodev_sym_session_init() use separate mempool objects
> for a single session.
> And structure rte_cryptodev_sym_session is not directly used
> by the application, it may cause ABI breakage if the structure
> is modified in future.
> 
> To address these two issues, the rte_cryptodev_sym_session_create
> will take one mempool object for both the session and session
> private data. The API rte_cryptodev_sym_session_init will now not
> take mempool object.
> rte_cryptodev_sym_session_create will now return an opaque session
> pointer which will be used by the app in rte_cryptodev_sym_session_init
> and other APIs.
> 
> With this change, rte_cryptodev_sym_session_init will send
> pointer to session private data of corresponding driver to the PMD
> based on the driver_id for filling the PMD data.
> 
> In data path, opaque session pointer is attached to rte_crypto_op
> and the PMD can call an internal library API to get the session
> private data pointer based on the driver id.
> 
> TODO:
> - inline APIs for opaque data
> - move rte_cryptodev_sym_session struct to cryptodev_pmd.h
> - currently nb_drivers are getting updated in RTE_INIT which
> result in increasing the memory requirements for session.
> This will be moved to PMD probe so that memory is created
> only for those PMDs which are probed and not just compiled in.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
>  app/test-crypto-perf/cperf.h                  |   1 -
>  app/test-crypto-perf/cperf_ops.c              |  33 ++---
>  app/test-crypto-perf/cperf_ops.h              |   6 +-
>  app/test-crypto-perf/cperf_test_latency.c     |   5 +-
>  app/test-crypto-perf/cperf_test_latency.h     |   1 -
>  .../cperf_test_pmd_cyclecount.c               |   5 +-
>  .../cperf_test_pmd_cyclecount.h               |   1 -
>  app/test-crypto-perf/cperf_test_throughput.c  |   5 +-
>  app/test-crypto-perf/cperf_test_throughput.h  |   1 -
>  app/test-crypto-perf/cperf_test_verify.c      |   5 +-
>  app/test-crypto-perf/cperf_test_verify.h      |   1 -
>  app/test-crypto-perf/main.c                   |  29 +---
>  app/test/test_cryptodev.c                     | 130 +++++-------------
>  app/test/test_cryptodev.h                     |   1 -
>  app/test/test_cryptodev_asym.c                |   1 -
>  app/test/test_cryptodev_blockcipher.c         |   6 +-
>  app/test/test_event_crypto_adapter.c          |  28 +---
>  app/test/test_ipsec.c                         |  22 +--
>  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |  33 +----
>  .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c    |  34 +----
>  drivers/crypto/armv8/rte_armv8_pmd_ops.c      |  34 +----
>  drivers/crypto/bcmfs/bcmfs_sym_session.c      |  36 +----
>  drivers/crypto/bcmfs/bcmfs_sym_session.h      |   6 +-
>  drivers/crypto/caam_jr/caam_jr.c              |  32 ++---
>  drivers/crypto/ccp/ccp_pmd_ops.c              |  32 +----
>  drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  18 ++-
>  drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |  18 +--
>  drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |  61 +++-----
>  drivers/crypto/cnxk/cnxk_cryptodev_ops.h      |  13 +-
>  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +---
>  drivers/crypto/dpaa_sec/dpaa_sec.c            |  31 +----
>  drivers/crypto/kasumi/rte_kasumi_pmd_ops.c    |  34 +----
>  drivers/crypto/mlx5/mlx5_crypto.c             |  24 +---
>  drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  36 ++---
>  drivers/crypto/nitrox/nitrox_sym.c            |  31 +----
>  drivers/crypto/null/null_crypto_pmd_ops.c     |  34 +----
>  .../crypto/octeontx/otx_cryptodev_hw_access.h |   1 -
>  drivers/crypto/octeontx/otx_cryptodev_ops.c   |  60 +++-----
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  52 +++----
>  .../octeontx2/otx2_cryptodev_ops_helper.h     |  16 +--
>  drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  35 +----
>  drivers/crypto/qat/qat_sym_session.c          |  29 +---
>  drivers/crypto/qat/qat_sym_session.h          |   6 +-
>  drivers/crypto/scheduler/scheduler_pmd_ops.c  |   9 +-
>  drivers/crypto/snow3g/rte_snow3g_pmd_ops.c    |  34 +----
>  drivers/crypto/virtio/virtio_cryptodev.c      |  31 ++---
>  drivers/crypto/zuc/rte_zuc_pmd_ops.c          |  35 +----
>  .../octeontx2/otx2_evdev_crypto_adptr_rx.h    |   3 +-
>  examples/fips_validation/fips_dev_self_test.c |  32 ++---
>  examples/fips_validation/main.c               |  20 +--
>  examples/ipsec-secgw/ipsec-secgw.c            |  72 +++++-----
>  examples/ipsec-secgw/ipsec.c                  |   3 +-
>  examples/ipsec-secgw/ipsec.h                  |   1 -
>  examples/ipsec-secgw/ipsec_worker.c           |   4 -
>  examples/l2fwd-crypto/main.c                  |  41 +-----
>  examples/vhost_crypto/main.c                  |  16 +--
>  lib/cryptodev/cryptodev_pmd.h                 |   7 +-
>  lib/cryptodev/rte_crypto.h                    |   2 +-
>  lib/cryptodev/rte_crypto_sym.h                |   2 +-
>  lib/cryptodev/rte_cryptodev.c                 |  73 ++++++----
>  lib/cryptodev/rte_cryptodev.h                 |  23 ++--
>  lib/cryptodev/rte_cryptodev_trace.h           |   5 +-
>  lib/pipeline/rte_table_action.c               |   8 +-
>  lib/pipeline/rte_table_action.h               |   2 +-
>  lib/vhost/rte_vhost_crypto.h                  |   3 -
>  lib/vhost/vhost_crypto.c                      |   7 +-
>  66 files changed, 399 insertions(+), 1050 deletions(-)
> 
> diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
> index 2b0aad095c..db58228dce 100644
> --- a/app/test-crypto-perf/cperf.h
> +++ b/app/test-crypto-perf/cperf.h
> @@ -15,7 +15,6 @@ struct cperf_op_fns;
> 
>  typedef void  *(*cperf_constructor_t)(
>  		struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id,
>  		uint16_t qp_id,
>  		const struct cperf_options *options,
> diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-
> perf/cperf_ops.c
> index 1b3cbe77b9..f094bc656d 100644
> --- a/app/test-crypto-perf/cperf_ops.c
> +++ b/app/test-crypto-perf/cperf_ops.c
> @@ -12,7 +12,7 @@ static int
>  cperf_set_ops_asym(struct rte_crypto_op **ops,
>  		   uint32_t src_buf_offset __rte_unused,
>  		   uint32_t dst_buf_offset __rte_unused, uint16_t nb_ops,
> -		   struct rte_cryptodev_sym_session *sess,
> +		   void *sess,
>  		   const struct cperf_options *options __rte_unused,
>  		   const struct cperf_test_vector *test_vector __rte_unused,
>  		   uint16_t iv_offset __rte_unused,
> @@ -40,7 +40,7 @@ static int
>  cperf_set_ops_security(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset __rte_unused,
>  		uint32_t dst_buf_offset __rte_unused,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options __rte_unused,
>  		const struct cperf_test_vector *test_vector __rte_unused,
>  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> @@ -106,7 +106,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
>  static int
>  cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector __rte_unused,
>  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> @@ -145,7 +145,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op
> **ops,
>  static int
>  cperf_set_ops_null_auth(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector __rte_unused,
>  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> @@ -184,7 +184,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
>  static int
>  cperf_set_ops_cipher(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset, uint32_t *imix_idx)
> @@ -240,7 +240,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
>  static int
>  cperf_set_ops_auth(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset, uint32_t *imix_idx)
> @@ -340,7 +340,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
>  static int
>  cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset, uint32_t *imix_idx)
> @@ -455,7 +455,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op
> **ops,
>  static int
>  cperf_set_ops_aead(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset, uint32_t *imix_idx)
> @@ -563,9 +563,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
>  	return 0;
>  }
> 
> -static struct rte_cryptodev_sym_session *
> +static void *
>  cperf_create_session(struct rte_mempool *sess_mp,
> -	struct rte_mempool *priv_mp,
>  	uint8_t dev_id,
>  	const struct cperf_options *options,
>  	const struct cperf_test_vector *test_vector,
> @@ -590,7 +589,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>  		if (sess == NULL)
>  			return NULL;
>  		rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
> -						     &xform, priv_mp);
> +						     &xform, sess_mp);
>  		if (rc < 0) {
>  			if (sess != NULL) {
>  				rte_cryptodev_asym_session_clear(dev_id,
> @@ -742,8 +741,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>  			cipher_xform.cipher.iv.length = 0;
>  		}
>  		/* create crypto session */
> -		rte_cryptodev_sym_session_init(dev_id, sess,
> &cipher_xform,
> -				priv_mp);
> +		rte_cryptodev_sym_session_init(dev_id, sess,
> &cipher_xform);
>  	/*
>  	 *  auth only
>  	 */
> @@ -770,8 +768,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
>  			auth_xform.auth.iv.length = 0;
>  		}
>  		/* create crypto session */
> -		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
> -				priv_mp);
> +		rte_cryptodev_sym_session_init(dev_id, sess,
> &auth_xform);
>  	/*
>  	 * cipher and auth
>  	 */
> @@ -830,12 +827,12 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
>  			cipher_xform.next = &auth_xform;
>  			/* create crypto session */
>  			rte_cryptodev_sym_session_init(dev_id,
> -					sess, &cipher_xform, priv_mp);
> +					sess, &cipher_xform);
>  		} else { /* auth then cipher */
>  			auth_xform.next = &cipher_xform;
>  			/* create crypto session */
>  			rte_cryptodev_sym_session_init(dev_id,
> -					sess, &auth_xform, priv_mp);
> +					sess, &auth_xform);
>  		}
>  	} else { /* options->op_type == CPERF_AEAD */
>  		aead_xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> @@ -856,7 +853,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
> 
>  		/* Create crypto session */
>  		rte_cryptodev_sym_session_init(dev_id,
> -					sess, &aead_xform, priv_mp);
> +					sess, &aead_xform);
>  	}
> 
>  	return sess;
> diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-
> perf/cperf_ops.h
> index ff125d12cd..3ff10491a0 100644
> --- a/app/test-crypto-perf/cperf_ops.h
> +++ b/app/test-crypto-perf/cperf_ops.h
> @@ -12,15 +12,15 @@
>  #include "cperf_test_vectors.h"
> 
> 
> -typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
> -		struct rte_mempool *sess_mp, struct rte_mempool
> *sess_priv_mp,
> +typedef void *(*cperf_sessions_create_t)(
> +		struct rte_mempool *sess_mp,
>  		uint8_t dev_id, const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset);
> 
>  typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
>  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> +		uint16_t nb_ops, void *sess,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
>  		uint16_t iv_offset, uint32_t *imix_idx);
> diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
> index 159fe8492b..4193f7e777 100644
> --- a/app/test-crypto-perf/cperf_test_latency.c
> +++ b/app/test-crypto-perf/cperf_test_latency.c
> @@ -24,7 +24,7 @@ struct cperf_latency_ctx {
> 
>  	struct rte_mempool *pool;
> 
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
> 
>  	cperf_populate_ops_t populate_ops;
> 
> @@ -59,7 +59,6 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
> 
>  void *
>  cperf_latency_test_constructor(struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id, uint16_t qp_id,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
> @@ -84,7 +83,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
>  		sizeof(struct rte_crypto_sym_op) +
>  		sizeof(struct cperf_op_result *);
> 
> -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
>  			test_vector, iv_offset);
>  	if (ctx->sess == NULL)
>  		goto err;
> diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-perf/cperf_test_latency.h
> index ed5b0a07bb..d3fc3218d7 100644
> --- a/app/test-crypto-perf/cperf_test_latency.h
> +++ b/app/test-crypto-perf/cperf_test_latency.h
> @@ -17,7 +17,6 @@
>  void *
>  cperf_latency_test_constructor(
>  		struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id,
>  		uint16_t qp_id,
>  		const struct cperf_options *options,
> diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> index cbbbedd9ba..3dd489376f 100644
> --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> @@ -27,7 +27,7 @@ struct cperf_pmd_cyclecount_ctx {
>  	struct rte_crypto_op **ops;
>  	struct rte_crypto_op **ops_processed;
> 
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
> 
>  	cperf_populate_ops_t populate_ops;
> 
> @@ -93,7 +93,6 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
> 
>  void *
>  cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id, uint16_t qp_id,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
> @@ -120,7 +119,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
>  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
>  			sizeof(struct rte_crypto_sym_op);
> 
> -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
>  			test_vector, iv_offset);
>  	if (ctx->sess == NULL)
>  		goto err;
> diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> index 3084038a18..beb4419910 100644
> --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> @@ -18,7 +18,6 @@
>  void *
>  cperf_pmd_cyclecount_test_constructor(
>  		struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id,
>  		uint16_t qp_id,
>  		const struct cperf_options *options,
> diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
> index 76fcda47ff..dc5c48b4da 100644
> --- a/app/test-crypto-perf/cperf_test_throughput.c
> +++ b/app/test-crypto-perf/cperf_test_throughput.c
> @@ -18,7 +18,7 @@ struct cperf_throughput_ctx {
> 
>  	struct rte_mempool *pool;
> 
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
> 
>  	cperf_populate_ops_t populate_ops;
> 
> @@ -64,7 +64,6 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
> 
>  void *
>  cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id, uint16_t qp_id,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
> @@ -87,7 +86,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
>  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
>  		sizeof(struct rte_crypto_sym_op);
> 
> -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
>  			test_vector, iv_offset);
>  	if (ctx->sess == NULL)
>  		goto err;
> diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-crypto-perf/cperf_test_throughput.h
> index 91e1a4b708..439ec8e559 100644
> --- a/app/test-crypto-perf/cperf_test_throughput.h
> +++ b/app/test-crypto-perf/cperf_test_throughput.h
> @@ -18,7 +18,6 @@
>  void *
>  cperf_throughput_test_constructor(
>  		struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id,
>  		uint16_t qp_id,
>  		const struct cperf_options *options,
> diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
> index 2939aeaa93..cf561dd700 100644
> --- a/app/test-crypto-perf/cperf_test_verify.c
> +++ b/app/test-crypto-perf/cperf_test_verify.c
> @@ -18,7 +18,7 @@ struct cperf_verify_ctx {
> 
>  	struct rte_mempool *pool;
> 
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
> 
>  	cperf_populate_ops_t populate_ops;
> 
> @@ -51,7 +51,6 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
> 
>  void *
>  cperf_verify_test_constructor(struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id, uint16_t qp_id,
>  		const struct cperf_options *options,
>  		const struct cperf_test_vector *test_vector,
> @@ -74,7 +73,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
>  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
>  		sizeof(struct rte_crypto_sym_op);
> 
> -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
>  			test_vector, iv_offset);
>  	if (ctx->sess == NULL)
>  		goto err;
> diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-perf/cperf_test_verify.h
> index ac2192ba99..9f70ad87ba 100644
> --- a/app/test-crypto-perf/cperf_test_verify.h
> +++ b/app/test-crypto-perf/cperf_test_verify.h
> @@ -18,7 +18,6 @@
>  void *
>  cperf_verify_test_constructor(
>  		struct rte_mempool *sess_mp,
> -		struct rte_mempool *sess_priv_mp,
>  		uint8_t dev_id,
>  		uint16_t qp_id,
>  		const struct cperf_options *options,
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 390380898e..c3327b7e55 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -118,35 +118,14 @@ fill_session_pool_socket(int32_t socket_id, uint32_t session_priv_size,
>  	char mp_name[RTE_MEMPOOL_NAMESIZE];
>  	struct rte_mempool *sess_mp;
> 
> -	if (session_pool_socket[socket_id].priv_mp == NULL) {
> -		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> -			"priv_sess_mp_%u", socket_id);
> -
> -		sess_mp = rte_mempool_create(mp_name,
> -					nb_sessions,
> -					session_priv_size,
> -					0, 0, NULL, NULL, NULL,
> -					NULL, socket_id,
> -					0);
> -
> -		if (sess_mp == NULL) {
> -			printf("Cannot create pool \"%s\" on socket %d\n",
> -				mp_name, socket_id);
> -			return -ENOMEM;
> -		}
> -
> -		printf("Allocated pool \"%s\" on socket %d\n",
> -			mp_name, socket_id);
> -		session_pool_socket[socket_id].priv_mp = sess_mp;
> -	}
> -
>  	if (session_pool_socket[socket_id].sess_mp == NULL) {
> 
>  		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
>  			"sess_mp_%u", socket_id);
> 
>  		sess_mp = rte_cryptodev_sym_session_pool_create(mp_name,
> -					nb_sessions, 0, 0, 0, socket_id);
> +					nb_sessions, session_priv_size,
> +					0, 0, socket_id);
> 
>  		if (sess_mp == NULL) {
>  			printf("Cannot create pool \"%s\" on socket %d\n",
> @@ -344,12 +323,9 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
>  			return ret;
> 
>  		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
> -		qp_conf.mp_session_private =
> -				session_pool_socket[socket_id].priv_mp;
> 
>  		if (opts->op_type == CPERF_ASYM_MODEX) {
>  			qp_conf.mp_session = NULL;
> -			qp_conf.mp_session_private = NULL;
>  		}
> 
>  		ret = rte_cryptodev_configure(cdev_id, &conf);
> @@ -704,7 +680,6 @@ main(int argc, char **argv)
> 
>  		ctx[i] = cperf_testmap[opts.test].constructor(
>  				session_pool_socket[socket_id].sess_mp,
> -				session_pool_socket[socket_id].priv_mp,
>  				cdev_id, qp_id,
>  				&opts, t_vec, &op_fns);
>  		if (ctx[i] == NULL) {
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 82f819211a..e5c7930c63 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -79,7 +79,7 @@ struct crypto_unittest_params {
>  #endif
> 
>  	union {
> -		struct rte_cryptodev_sym_session *sess;
> +		void *sess;
>  #ifdef RTE_LIB_SECURITY
>  		void *sec_session;
>  #endif
> @@ -119,7 +119,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
>  		uint8_t *hmac_key);
> 
>  static int
> -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
>  		struct crypto_unittest_params *ut_params,
>  		struct crypto_testsuite_params *ts_param,
>  		const uint8_t *cipher,
> @@ -596,23 +596,11 @@ testsuite_setup(void)
>  	}
> 
>  	ts_params->session_mpool = rte_cryptodev_sym_session_pool_create(
> -			"test_sess_mp", MAX_NB_SESSIONS, 0, 0, 0,
> +			"test_sess_mp", MAX_NB_SESSIONS, session_size, 0, 0,
>  			SOCKET_ID_ANY);
>  	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
>  			"session mempool allocation failed");
> 
> -	ts_params->session_priv_mpool = rte_mempool_create(
> -			"test_sess_mp_priv",
> -			MAX_NB_SESSIONS,
> -			session_size,
> -			0, 0, NULL, NULL, NULL,
> -			NULL, SOCKET_ID_ANY,
> -			0);
> -	TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
> -			"session mempool allocation failed");
> -
> -
> -
>  	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
>  			&ts_params->conf),
>  			"Failed to configure cryptodev %u with %u qps",
> @@ -620,7 +608,6 @@ testsuite_setup(void)
> 
>  	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
>  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> -	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
> 
>  	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
>  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> @@ -650,11 +637,6 @@ testsuite_teardown(void)
>  	}
> 
>  	/* Free session mempools */
> -	if (ts_params->session_priv_mpool != NULL) {
> -		rte_mempool_free(ts_params->session_priv_mpool);
> -		ts_params->session_priv_mpool = NULL;
> -	}
> -
>  	if (ts_params->session_mpool != NULL) {
>  		rte_mempool_free(ts_params->session_mpool);
>  		ts_params->session_mpool = NULL;
> @@ -1330,7 +1312,6 @@ dev_configure_and_start(uint64_t ff_disable)
>  	ts_params->conf.ff_disable = ff_disable;
>  	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
>  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> -	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
> 
>  	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
>  			&ts_params->conf),
> @@ -1552,7 +1533,6 @@ test_queue_pair_descriptor_setup(void)
>  	 */
>  	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
>  	qp_conf.mp_session = ts_params->session_mpool;
> -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> 
>  	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
>  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> @@ -2146,8 +2126,7 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
> 
>  	/* Create crypto session*/
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			ut_params->sess, &ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			ut_params->sess, &ut_params->cipher_xform);
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
>  	/* Generate crypto op data structure */
> @@ -2247,7 +2226,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
>  		uint8_t *hmac_key);
> 
>  static int
> -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
>  		struct crypto_unittest_params *ut_params,
>  		struct crypto_testsuite_params *ts_params,
>  		const uint8_t *cipher,
> @@ -2288,7 +2267,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> 
> 
>  static int
> -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
>  		struct crypto_unittest_params *ut_params,
>  		struct crypto_testsuite_params *ts_params,
>  		const uint8_t *cipher,
> @@ -2401,8 +2380,7 @@ create_wireless_algo_hash_session(uint8_t dev_id,
>  			ts_params->session_mpool);
> 
>  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->auth_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->auth_xform);
>  	if (status == -ENOTSUP)
>  		return TEST_SKIPPED;
> 
> @@ -2443,8 +2421,7 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
>  			ts_params->session_mpool);
> 
>  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->cipher_xform);
>  	if (status == -ENOTSUP)
>  		return TEST_SKIPPED;
> 
> @@ -2566,8 +2543,7 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
>  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->cipher_xform);
>  	if (status == -ENOTSUP)
>  		return TEST_SKIPPED;
> 
> @@ -2629,8 +2605,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
>  			ts_params->session_mpool);
> 
>  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->cipher_xform);
>  	if (status == -ENOTSUP)
>  		return TEST_SKIPPED;
> 
> @@ -2699,13 +2674,11 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
>  		ut_params->auth_xform.next = NULL;
>  		ut_params->cipher_xform.next = &ut_params->auth_xform;
>  		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -				&ut_params->cipher_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->cipher_xform);
> 
>  	} else
>  		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -				&ut_params->auth_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->auth_xform);
> 
>  	if (status == -ENOTSUP)
>  		return TEST_SKIPPED;
> @@ -7838,8 +7811,7 @@ create_aead_session(uint8_t dev_id, enum rte_crypto_aead_algorithm algo,
>  			ts_params->session_mpool);
> 
>  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->aead_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->aead_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -10992,8 +10964,7 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
>  			ts_params->session_mpool);
> 
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			ut_params->sess, &ut_params->auth_xform,
> -			ts_params->session_priv_mpool);
> +			ut_params->sess, &ut_params->auth_xform);
> 
>  	if (ut_params->sess == NULL)
>  		return TEST_FAILED;
> @@ -11206,7 +11177,7 @@ test_multi_session(void)
>  	struct crypto_unittest_params *ut_params = &unittest_params;
> 
>  	struct rte_cryptodev_info dev_info;
> -	struct rte_cryptodev_sym_session **sessions;
> +	void **sessions;
> 
>  	uint16_t i;
> 
> @@ -11229,9 +11200,7 @@ test_multi_session(void)
> 
>  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> 
> -	sessions = rte_malloc(NULL,
> -			sizeof(struct rte_cryptodev_sym_session *) *
> -			(MAX_NB_SESSIONS + 1), 0);
> +	sessions = rte_malloc(NULL, sizeof(void *) * (MAX_NB_SESSIONS + 1), 0);
> 
>  	/* Create multiple crypto sessions*/
>  	for (i = 0; i < MAX_NB_SESSIONS; i++) {
> @@ -11240,8 +11209,7 @@ test_multi_session(void)
>  				ts_params->session_mpool);
> 
>  		rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -				sessions[i], &ut_params->auth_xform,
> -				ts_params->session_priv_mpool);
> +				sessions[i], &ut_params->auth_xform);
>  		TEST_ASSERT_NOT_NULL(sessions[i],
>  				"Session creation failed at session number %u",
>  				i);
> @@ -11279,8 +11247,7 @@ test_multi_session(void)
>  	sessions[i] = NULL;
>  	/* Next session create should fail */
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			sessions[i], &ut_params->auth_xform,
> -			ts_params->session_priv_mpool);
> +			sessions[i], &ut_params->auth_xform);
>  	TEST_ASSERT_NULL(sessions[i],
>  			"Session creation succeeded unexpectedly!");
> 
> @@ -11311,7 +11278,7 @@ test_multi_session_random_usage(void)
>  {
>  	struct crypto_testsuite_params *ts_params = &testsuite_params;
>  	struct rte_cryptodev_info dev_info;
> -	struct rte_cryptodev_sym_session **sessions;
> +	void **sessions;
>  	uint32_t i, j;
>  	struct multi_session_params ut_paramz[] = {
> 
> @@ -11355,8 +11322,7 @@ test_multi_session_random_usage(void)
>  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> 
>  	sessions = rte_malloc(NULL,
> -			(sizeof(struct rte_cryptodev_sym_session *)
> -					* MAX_NB_SESSIONS) + 1, 0);
> +			(sizeof(void *) * MAX_NB_SESSIONS) + 1, 0);
> 
>  	for (i = 0; i < MB_SESSION_NUMBER; i++) {
>  		sessions[i] = rte_cryptodev_sym_session_create(
> @@ -11373,8 +11339,7 @@ test_multi_session_random_usage(void)
>  		rte_cryptodev_sym_session_init(
>  				ts_params->valid_devs[0],
>  				sessions[i],
> -				&ut_paramz[i].ut_params.auth_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_paramz[i].ut_params.auth_xform);
> 
>  		TEST_ASSERT_NOT_NULL(sessions[i],
>  				"Session creation failed at session number %u",
> @@ -11457,8 +11422,7 @@ test_null_invalid_operation(void)
> 
>  	/* Create Crypto session*/
>  	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			ut_params->sess, &ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			ut_params->sess, &ut_params->cipher_xform);
>  	TEST_ASSERT(ret < 0,
>  			"Session creation succeeded unexpectedly");
> 
> @@ -11475,8 +11439,7 @@ test_null_invalid_operation(void)
> 
>  	/* Create Crypto session*/
>  	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			ut_params->sess, &ut_params->auth_xform,
> -			ts_params->session_priv_mpool);
> +			ut_params->sess, &ut_params->auth_xform);
>  	TEST_ASSERT(ret < 0,
>  			"Session creation succeeded unexpectedly");
> 
> @@ -11521,8 +11484,7 @@ test_null_burst_operation(void)
> 
>  	/* Create Crypto session*/
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> -			ut_params->sess, &ut_params->cipher_xform,
> -			ts_params->session_priv_mpool);
> +			ut_params->sess, &ut_params->cipher_xform);
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
>  	TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params->op_mpool,
> @@ -11634,7 +11596,6 @@ test_enq_callback_setup(void)
> 
>  	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
>  	qp_conf.mp_session = ts_params->session_mpool;
> -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> 
>  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
>  			ts_params->valid_devs[0], qp_id, &qp_conf,
> @@ -11734,7 +11695,6 @@ test_deq_callback_setup(void)
> 
>  	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
>  	qp_conf.mp_session = ts_params->session_mpool;
> -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> 
>  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
>  			ts_params->valid_devs[0], qp_id, &qp_conf,
> @@ -11943,8 +11903,7 @@ static int create_gmac_session(uint8_t dev_id,
>  			ts_params->session_mpool);
> 
>  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -			&ut_params->auth_xform,
> -			ts_params->session_priv_mpool);
> +			&ut_params->auth_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -12588,8 +12547,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
>  			ts_params->session_mpool);
> 
>  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -				&ut_params->auth_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->auth_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -12641,8 +12599,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
>  			ts_params->session_mpool);
> 
>  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> -				&ut_params->auth_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->auth_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -13149,8 +13106,7 @@ test_authenticated_encrypt_with_esn(
> 
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
>  				ut_params->sess,
> -				&ut_params->cipher_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->cipher_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -13281,8 +13237,7 @@ test_authenticated_decrypt_with_esn(
> 
>  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
>  				ut_params->sess,
> -				&ut_params->auth_xform,
> -				ts_params->session_priv_mpool);
> +				&ut_params->auth_xform);
> 
>  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> 
> @@ -14003,11 +13958,6 @@ test_scheduler_attach_worker_op(void)
>  			rte_mempool_free(ts_params->session_mpool);
>  			ts_params->session_mpool = NULL;
>  		}
> -		if (ts_params->session_priv_mpool) {
> -			rte_mempool_free(ts_params->session_priv_mpool);
> -			ts_params->session_priv_mpool = NULL;
> -		}
> -
>  		if (info.sym.max_nb_sessions != 0 &&
>  				info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
>  			RTE_LOG(ERR, USER1,
> @@ -14024,32 +13974,14 @@ test_scheduler_attach_worker_op(void)
>  			ts_params->session_mpool =
>  				rte_cryptodev_sym_session_pool_create(
>  						"test_sess_mp",
> -						MAX_NB_SESSIONS, 0, 0, 0,
> +						MAX_NB_SESSIONS,
> +						session_size, 0, 0,
>  						SOCKET_ID_ANY);
>  			TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
>  					"session mempool allocation failed");
>  		}
> 
> -		/*
> -		 * Create mempool with maximum number of sessions,
> -		 * to include device specific session private data
> -		 */
> -		if (ts_params->session_priv_mpool == NULL) {
> -			ts_params->session_priv_mpool = rte_mempool_create(
> -					"test_sess_mp_priv",
> -					MAX_NB_SESSIONS,
> -					session_size,
> -					0, 0, NULL, NULL, NULL,
> -					NULL, SOCKET_ID_ANY,
> -					0);
> -
> -			TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
> -					"session mempool allocation failed");
> -		}
> -
>  		ts_params->qp_conf.mp_session = ts_params->session_mpool;
> -		ts_params->qp_conf.mp_session_private =
> -				ts_params->session_priv_mpool;
> 
>  		ret = rte_cryptodev_scheduler_worker_attach(sched_id,
>  				(uint8_t)i);
> diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> index 1cdd84d01f..a3a10d484b 100644
> --- a/app/test/test_cryptodev.h
> +++ b/app/test/test_cryptodev.h
> @@ -89,7 +89,6 @@ struct crypto_testsuite_params {
>  	struct rte_mempool *large_mbuf_pool;
>  	struct rte_mempool *op_mpool;
>  	struct rte_mempool *session_mpool;
> -	struct rte_mempool *session_priv_mpool;
>  	struct rte_cryptodev_config conf;
>  	struct rte_cryptodev_qp_conf qp_conf;
> 
> diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
> index 9d19a6d6d9..35da574da8 100644
> --- a/app/test/test_cryptodev_asym.c
> +++ b/app/test/test_cryptodev_asym.c
> @@ -924,7 +924,6 @@ testsuite_setup(void)
>  	/* configure qp */
>  	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
>  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> -	ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
>  	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
>  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
>  			dev_id, qp_id, &ts_params->qp_conf,
> diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
> index 3cdb2c96e8..9417803f18 100644
> --- a/app/test/test_cryptodev_blockcipher.c
> +++ b/app/test/test_cryptodev_blockcipher.c
> @@ -68,7 +68,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
>  	struct rte_mempool *mbuf_pool,
>  	struct rte_mempool *op_mpool,
>  	struct rte_mempool *sess_mpool,
> -	struct rte_mempool *sess_priv_mpool,
>  	uint8_t dev_id,
>  	char *test_msg)
>  {
> @@ -81,7 +80,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
>  	struct rte_crypto_sym_op *sym_op = NULL;
>  	struct rte_crypto_op *op = NULL;
>  	struct rte_cryptodev_info dev_info;
> -	struct rte_cryptodev_sym_session *sess = NULL;
> +	void *sess = NULL;
> 
>  	int status = TEST_SUCCESS;
>  	const struct blockcipher_test_data *tdata = t->test_data;
> @@ -514,7 +513,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
>  		sess = rte_cryptodev_sym_session_create(sess_mpool);
> 
>  		status = rte_cryptodev_sym_session_init(dev_id, sess,
> -				init_xform, sess_priv_mpool);
> +				init_xform);
>  		if (status == -ENOTSUP) {
>  			snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "UNSUPPORTED");
>  			status = TEST_SKIPPED;
> @@ -831,7 +830,6 @@ blockcipher_test_case_run(const void *data)
>  			p_testsuite_params->mbuf_pool,
>  			p_testsuite_params->op_mpool,
>  			p_testsuite_params->session_mpool,
> -			p_testsuite_params->session_priv_mpool,
>  			p_testsuite_params->valid_devs[0],
>  			test_msg);
>  	return status;
> diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
> index 3ad20921e2..59229a1cde 100644
> --- a/app/test/test_event_crypto_adapter.c
> +++ b/app/test/test_event_crypto_adapter.c
> @@ -61,7 +61,6 @@ struct event_crypto_adapter_test_params {
>  	struct rte_mempool *mbuf_pool;
>  	struct rte_mempool *op_mpool;
>  	struct rte_mempool *session_mpool;
> -	struct rte_mempool *session_priv_mpool;
>  	struct rte_cryptodev_config *config;
>  	uint8_t crypto_event_port_id;
>  	uint8_t internal_port_op_fwd;
> @@ -167,7 +166,7 @@ static int
>  test_op_forward_mode(uint8_t session_less)
>  {
>  	struct rte_crypto_sym_xform cipher_xform;
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
>  	union rte_event_crypto_metadata m_data;
>  	struct rte_crypto_sym_op *sym_op;
>  	struct rte_crypto_op *op;
> @@ -203,7 +202,7 @@ test_op_forward_mode(uint8_t session_less)
> 
>  		/* Create Crypto session*/
>  		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> -				&cipher_xform, params.session_priv_mpool);
> +				&cipher_xform);
>  		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> 
>  		ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID,
> @@ -367,7 +366,7 @@ static int
>  test_op_new_mode(uint8_t session_less)
>  {
>  	struct rte_crypto_sym_xform cipher_xform;
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
>  	union rte_event_crypto_metadata m_data;
>  	struct rte_crypto_sym_op *sym_op;
>  	struct rte_crypto_op *op;
> @@ -411,7 +410,7 @@ test_op_new_mode(uint8_t session_less)
>  						&m_data, sizeof(m_data));
>  		}
>  		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> -				&cipher_xform, params.session_priv_mpool);
> +				&cipher_xform);
>  		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> 
>  		rte_crypto_op_attach_sym_session(op, sess);
> @@ -553,22 +552,12 @@ configure_cryptodev(void)
> 
>  	params.session_mpool = rte_cryptodev_sym_session_pool_create(
>  			"CRYPTO_ADAPTER_SESSION_MP",
> -			MAX_NB_SESSIONS, 0, 0,
> +			MAX_NB_SESSIONS, session_size, 0,
>  			sizeof(union rte_event_crypto_metadata),
>  			SOCKET_ID_ANY);
>  	TEST_ASSERT_NOT_NULL(params.session_mpool,
>  			"session mempool allocation failed\n");
> 
> -	params.session_priv_mpool = rte_mempool_create(
> -				"CRYPTO_AD_SESS_MP_PRIV",
> -				MAX_NB_SESSIONS,
> -				session_size,
> -				0, 0, NULL, NULL, NULL,
> -				NULL, SOCKET_ID_ANY,
> -				0);
> -	TEST_ASSERT_NOT_NULL(params.session_priv_mpool,
> -			"session mempool allocation failed\n");
> -
>  	rte_cryptodev_info_get(TEST_CDEV_ID, &info);
>  	conf.nb_queue_pairs = info.max_nb_queue_pairs;
>  	conf.socket_id = SOCKET_ID_ANY;
> @@ -580,7 +569,6 @@ configure_cryptodev(void)
> 
>  	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
>  	qp_conf.mp_session = params.session_mpool;
> -	qp_conf.mp_session_private = params.session_priv_mpool;
> 
>  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
>  			TEST_CDEV_ID, TEST_CDEV_QP_ID, &qp_conf,
> @@ -934,12 +922,6 @@ crypto_teardown(void)
>  		rte_mempool_free(params.session_mpool);
>  		params.session_mpool = NULL;
>  	}
> -	if (params.session_priv_mpool != NULL) {
> -		rte_mempool_avail_count(params.session_priv_mpool);
> -		rte_mempool_free(params.session_priv_mpool);
> -		params.session_priv_mpool = NULL;
> -	}
> -
>  	/* Free ops mempool */
>  	if (params.op_mpool != NULL) {
>  		RTE_LOG(DEBUG, USER1, "EVENT_CRYPTO_SYM_OP_POOL count %u\n",
> diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> index 2ffa2a8e79..134545efe1 100644
> --- a/app/test/test_ipsec.c
> +++ b/app/test/test_ipsec.c
> @@ -355,20 +355,9 @@ testsuite_setup(void)
>  		return TEST_FAILED;
>  	}
> 
> -	ts_params->qp_conf.mp_session_private = rte_mempool_create(
> -				"test_priv_sess_mp",
> -				MAX_NB_SESSIONS,
> -				sess_sz,
> -				0, 0, NULL, NULL, NULL,
> -				NULL, SOCKET_ID_ANY,
> -				0);
> -
> -	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
> -			"private session mempool allocation failed");
> -
>  	ts_params->qp_conf.mp_session =
>  		rte_cryptodev_sym_session_pool_create("test_sess_mp",
> -			MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
> +			MAX_NB_SESSIONS, sess_sz, 0, 0, SOCKET_ID_ANY);
> 
>  	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
>  			"session mempool allocation failed");
> @@ -413,11 +402,6 @@ testsuite_teardown(void)
>  		rte_mempool_free(ts_params->qp_conf.mp_session);
>  		ts_params->qp_conf.mp_session = NULL;
>  	}
> -
> -	if (ts_params->qp_conf.mp_session_private != NULL) {
> -		rte_mempool_free(ts_params->qp_conf.mp_session_private);
> -		ts_params->qp_conf.mp_session_private = NULL;
> -	}
>  }
> 
>  static int
> @@ -644,7 +628,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
>  	struct rte_cryptodev_qp_conf *qp, uint8_t dev_id, uint32_t j)
>  {
>  	int32_t rc;
> -	struct rte_cryptodev_sym_session *s;
> +	void *s;
> 
>  	s = rte_cryptodev_sym_session_create(qp->mp_session);
>  	if (s == NULL)
> @@ -652,7 +636,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
> 
>  	/* initiliaze SA crypto session for device */
>  	rc = rte_cryptodev_sym_session_init(dev_id, s,
> -			ut->crypto_xforms, qp->mp_session_private);
> +			ut->crypto_xforms);
>  	if (rc == 0) {
>  		ut->ss[j].crypto.ses = s;
>  		return 0;
> diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> index edb7275e76..75330292af 100644
> --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> @@ -235,7 +235,6 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  		goto qp_setup_cleanup;
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> 
> @@ -259,10 +258,8 @@ aesni_gcm_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
>  	struct aesni_gcm_private *internals = dev->data->dev_private;
> 
> @@ -271,42 +268,24 @@ aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		AESNI_GCM_LOG(ERR,
> -				"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
>  	ret = aesni_gcm_set_session_parameters(internals->ops,
> -				sess_private_data, xform);
> +				sess, xform);
>  	if (ret != 0) {
>  		AESNI_GCM_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -			sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct aesni_gcm_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct aesni_gcm_session));
>  }
> 
>  struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
> diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> index 39c67e3952..efdc05c45f 100644
> --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> @@ -944,7 +944,6 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	}
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->stats, 0, sizeof(qp->stats));
> 
> @@ -974,11 +973,8 @@ aesni_mb_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  /** Configure a aesni multi-buffer session from a crypto xform chain */
>  static int
>  aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
> -		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		struct rte_crypto_sym_xform *xform, void *sess)
>  {
> -	void *sess_private_data;
>  	struct aesni_mb_private *internals = dev->data->dev_private;
>  	int ret;
> 
> @@ -987,43 +983,25 @@ aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		AESNI_MB_LOG(ERR,
> -				"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
>  	ret = aesni_mb_set_session_parameters(internals->mb_mgr,
> -			sess_private_data, xform);
> +			sess, xform);
>  	if (ret != 0) {
>  		AESNI_MB_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -			sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> +	RTE_SET_USED(dev);
> 
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct aesni_mb_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct aesni_mb_session));
>  }
> 
>  struct rte_cryptodev_ops aesni_mb_pmd_ops = {
> diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> index 1b2749fe62..2d3b54b063 100644
> --- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> +++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> @@ -244,7 +244,6 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  		goto qp_setup_cleanup;
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->stats, 0, sizeof(qp->stats));
> 
> @@ -268,10 +267,8 @@ armv8_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	if (unlikely(sess == NULL)) {
> @@ -279,42 +276,23 @@ armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		CDEV_LOG_ERR(
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = armv8_crypto_set_session_parameters(sess_private_data, xform);
> +	ret = armv8_crypto_set_session_parameters(sess, xform);
>  	if (ret != 0) {
>  		ARMV8_CRYPTO_LOG_ERR("failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -			sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct armv8_crypto_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct armv8_crypto_session));
>  }
> 
>  struct rte_cryptodev_ops armv8_crypto_pmd_ops = {
> diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c
> b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> index 675ed0ad55..b4b167d0c2 100644
> --- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
> +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> @@ -224,10 +224,9 @@ bcmfs_sym_get_session(struct rte_crypto_op *op)
>  int
>  bcmfs_sym_session_configure(struct rte_cryptodev *dev,
>  			    struct rte_crypto_sym_xform *xform,
> -			    struct rte_cryptodev_sym_session *sess,
> -			    struct rte_mempool *mempool)
> +			    void *sess)
>  {
> -	void *sess_private_data;
> +	RTE_SET_USED(dev);
>  	int ret;
> 
>  	if (unlikely(sess == NULL)) {
> @@ -235,44 +234,23 @@ bcmfs_sym_session_configure(struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		BCMFS_DP_LOG(ERR,
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = crypto_set_session_parameters(sess_private_data, xform);
> +	ret = crypto_set_session_parameters(sess, xform);
> 
>  	if (ret != 0) {
>  		BCMFS_DP_LOG(ERR, "Failed configure session parameters");
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -				     sess_private_data);
> -
>  	return 0;
>  }
> 
>  /* Clear the memory of session so it doesn't leave key material behind */
>  void
> -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> -			struct rte_cryptodev_sym_session  *sess)
> +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> -	if (sess_priv) {
> -		struct rte_mempool *sess_mp;
> -
> -		memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
> -		sess_mp = rte_mempool_from_obj(sess_priv);
> -
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	RTE_SET_USED(dev);
> +	if (sess)
> +		memset(sess, 0, sizeof(struct bcmfs_sym_session));
>  }
> 
>  unsigned int
> diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h
> b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> index d40595b4bd..7faafe2fd5 100644
> --- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
> +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> @@ -93,12 +93,10 @@ bcmfs_process_crypto_op(struct rte_crypto_op *op,
>  int
>  bcmfs_sym_session_configure(struct rte_cryptodev *dev,
>  			    struct rte_crypto_sym_xform *xform,
> -			    struct rte_cryptodev_sym_session *sess,
> -			    struct rte_mempool *mempool);
> +			    void *sess);
> 
>  void
> -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> -			struct rte_cryptodev_sym_session  *sess);
> +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> 
>  unsigned int
>  bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
> diff --git a/drivers/crypto/caam_jr/caam_jr.c
> b/drivers/crypto/caam_jr/caam_jr.c
> index ce7a100778..8a04820fa6 100644
> --- a/drivers/crypto/caam_jr/caam_jr.c
> +++ b/drivers/crypto/caam_jr/caam_jr.c
> @@ -1692,52 +1692,36 @@ caam_jr_set_session_parameters(struct rte_cryptodev *dev,
>  static int
>  caam_jr_sym_session_configure(struct rte_cryptodev *dev,
>  			      struct rte_crypto_sym_xform *xform,
> -			      struct rte_cryptodev_sym_session *sess,
> -			      struct rte_mempool *mempool)
> +			      void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		CAAM_JR_ERR("Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	memset(sess_private_data, 0, sizeof(struct caam_jr_session));
> -	ret = caam_jr_set_session_parameters(dev, xform, sess_private_data);
> +	memset(sess, 0, sizeof(struct caam_jr_session));
> +	ret = caam_jr_set_session_parameters(dev, xform, sess);
>  	if (ret != 0) {
>  		CAAM_JR_ERR("failed to configure session parameters");
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
> -
>  	return 0;
>  }
> 
>  /* Clear the memory of session so it doesn't leave key material behind */
>  static void
> -caam_jr_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +caam_jr_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
> +	RTE_SET_USED(dev);
> +
> +	struct caam_jr_session *s = (struct caam_jr_session *)sess;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> -	if (sess_priv) {
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -
> +	if (sess) {
>  		rte_free(s->cipher_key.data);
>  		rte_free(s->auth_key.data);
>  		memset(s, 0, sizeof(struct caam_jr_session));
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
>  	}
>  }
> 
> diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c
> b/drivers/crypto/ccp/ccp_pmd_ops.c
> index 0d615d311c..cac1268130 100644
> --- a/drivers/crypto/ccp/ccp_pmd_ops.c
> +++ b/drivers/crypto/ccp/ccp_pmd_ops.c
> @@ -727,7 +727,6 @@ ccp_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	}
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	/* mempool for batch info */
>  	qp->batch_mp = rte_mempool_create(
> @@ -758,11 +757,9 @@ ccp_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  			  struct rte_crypto_sym_xform *xform,
> -			  struct rte_cryptodev_sym_session *sess,
> -			  struct rte_mempool *mempool)
> +			  void *sess)
>  {
>  	int ret;
> -	void *sess_private_data;
>  	struct ccp_private *internals;
> 
>  	if (unlikely(sess == NULL || xform == NULL)) {
> @@ -770,39 +767,22 @@ ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		return -ENOMEM;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		CCP_LOG_ERR("Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
>  	internals = (struct ccp_private *)dev->data->dev_private;
> -	ret = ccp_set_session_parameters(sess_private_data, xform, internals);
> +	ret = ccp_set_session_parameters(sess, xform, internals);
>  	if (ret != 0) {
>  		CCP_LOG_ERR("failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> -	set_sym_session_private_data(sess, dev->driver_id,
> -				 sess_private_data);
> 
>  	return 0;
>  }
> 
>  static void
> -ccp_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		      struct rte_cryptodev_sym_session *sess)
> +ccp_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> -	if (sess_priv) {
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -
> -		rte_mempool_put(sess_mp, sess_priv);
> -		memset(sess_priv, 0, sizeof(struct ccp_session));
> -		set_sym_session_private_data(sess, index, NULL);
> -	}
> +	RTE_SET_USED(dev);
> +	if (sess)
> +		memset(sess, 0, sizeof(struct ccp_session));
>  }
> 
>  struct rte_cryptodev_ops ccp_ops = {
> diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> index 99968cc353..50cae5e3d6 100644
> --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> @@ -32,17 +32,18 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
>  	if (sess == NULL)
>  		return NULL;
> 
> -	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> -				    sess, qp->sess_mp_priv);
> +	sess->sess_data[driver_id].data =
> +			(void *)((uint8_t *)sess +
> +			rte_cryptodev_sym_get_header_session_size() +
> +			(driver_id * sess->priv_sz));
> +	priv = get_sym_session_private_data(sess, driver_id);
> +	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, (void *)priv);
>  	if (ret)
>  		goto sess_put;
> 
> -	priv = get_sym_session_private_data(sess, driver_id);
> -
>  	sym_op->session = sess;
> 
>  	return priv;
> -
>  sess_put:
>  	rte_mempool_put(qp->sess_mp, sess);
>  	return NULL;
> @@ -144,9 +145,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
>  			ret = cpt_sym_inst_fill(qp, op, sess, infl_req,
>  						&inst[0]);
>  			if (unlikely(ret)) {
> -				sym_session_clear(cn10k_cryptodev_driver_id,
> -						  op->sym->session);
> -				rte_mempool_put(qp->sess_mp, op->sym->session);
> +				sym_session_clear(op->sym->session);
>  				return 0;
>  			}
>  			w7 = sess->cpt_inst_w7;
> @@ -437,8 +436,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
>  temp_sess_free:
>  	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
>  		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> -			sym_session_clear(cn10k_cryptodev_driver_id,
> -					  cop->sym->session);
> +			sym_session_clear(cop->sym->session);
>  			sz = rte_cryptodev_sym_get_existing_header_session_size(
>  				cop->sym->session);
>  			memset(cop->sym->session, 0, sz);
> diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> index 4c2dc5b080..5f83581131 100644
> --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> @@ -81,17 +81,19 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
>  	if (sess == NULL)
>  		return NULL;
> 
> -	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> -				    sess, qp->sess_mp_priv);
> +	sess->sess_data[driver_id].data =
> +			(void *)((uint8_t *)sess +
> +			rte_cryptodev_sym_get_header_session_size() +
> +			(driver_id * sess->priv_sz));
> +	priv = get_sym_session_private_data(sess, driver_id);
> +	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform,
> +			(void *)priv);
>  	if (ret)
>  		goto sess_put;
> 
> -	priv = get_sym_session_private_data(sess, driver_id);
> -
>  	sym_op->session = sess;
> 
>  	return priv;
> -
>  sess_put:
>  	rte_mempool_put(qp->sess_mp, sess);
>  	return NULL;
> @@ -126,8 +128,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
>  			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
>  						     inst);
>  			if (unlikely(ret)) {
> -				sym_session_clear(cn9k_cryptodev_driver_id,
> -						  op->sym->session);
> +				sym_session_clear(op->sym->session);
>  				rte_mempool_put(qp->sess_mp, op->sym->session);
>  			}
>  			inst->w7.u64 = sess->cpt_inst_w7;
> @@ -484,8 +485,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
>  temp_sess_free:
>  	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
>  		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> -			sym_session_clear(cn9k_cryptodev_driver_id,
> -					  cop->sym->session);
> +			sym_session_clear(cop->sym->session);
>  			sz = rte_cryptodev_sym_get_existing_header_session_size(
>  				cop->sym->session);
>  			memset(cop->sym->session, 0, sz);
> diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> index 41d8fe49e1..52d9cf0cf3 100644
> --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> @@ -379,7 +379,6 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	}
> 
>  	qp->sess_mp = conf->mp_session;
> -	qp->sess_mp_priv = conf->mp_session_private;
>  	dev->data->queue_pairs[qp_id] = qp;
> 
>  	return 0;
> @@ -493,27 +492,20 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
>  }
> 
>  int
> -sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> +sym_session_configure(struct roc_cpt *roc_cpt,
>  		      struct rte_crypto_sym_xform *xform,
> -		      struct rte_cryptodev_sym_session *sess,
> -		      struct rte_mempool *pool)
> +		      void *sess)
>  {
>  	struct cnxk_se_sess *sess_priv;
> -	void *priv;
>  	int ret;
> 
>  	ret = sym_xform_verify(xform);
>  	if (unlikely(ret < 0))
>  		return ret;
> 
> -	if (unlikely(rte_mempool_get(pool, &priv))) {
> -		plt_dp_err("Could not allocate session private data");
> -		return -ENOMEM;
> -	}
> +	memset(sess, 0, sizeof(struct cnxk_se_sess));
> 
> -	memset(priv, 0, sizeof(struct cnxk_se_sess));
> -
> -	sess_priv = priv;
> +	sess_priv = sess;
> 
>  	switch (ret) {
>  	case CNXK_CPT_CIPHER:
> @@ -547,7 +539,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
>  	}
> 
>  	if (ret)
> -		goto priv_put;
> +		return -ENOTSUP;
> 
>  	if ((sess_priv->roc_se_ctx.fc_type == ROC_SE_HASH_HMAC) &&
>  	    cpt_mac_len_verify(&xform->auth)) {
> @@ -557,66 +549,45 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
>  			sess_priv->roc_se_ctx.auth_key = NULL;
>  		}
> 
> -		ret = -ENOTSUP;
> -		goto priv_put;
> +		return -ENOTSUP;
>  	}
> 
>  	sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
> 
> -	set_sym_session_private_data(sess, driver_id, sess_priv);
> -
>  	return 0;
> -
> -priv_put:
> -	rte_mempool_put(pool, priv);
> -
> -	return -ENOTSUP;
>  }
> 
>  int
>  cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
>  			       struct rte_crypto_sym_xform *xform,
> -			       struct rte_cryptodev_sym_session *sess,
> -			       struct rte_mempool *pool)
> +			       void *sess)
>  {
>  	struct cnxk_cpt_vf *vf = dev->data->dev_private;
>  	struct roc_cpt *roc_cpt = &vf->cpt;
> -	uint8_t driver_id;
> 
> -	driver_id = dev->driver_id;
> -
> -	return sym_session_configure(roc_cpt, driver_id, xform, sess, pool);
> +	return sym_session_configure(roc_cpt, xform, sess);
>  }
> 
>  void
> -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> +sym_session_clear(void *sess)
>  {
> -	void *priv = get_sym_session_private_data(sess, driver_id);
> -	struct cnxk_se_sess *sess_priv;
> -	struct rte_mempool *pool;
> +	struct cnxk_se_sess *sess_priv = sess;
> 
> -	if (priv == NULL)
> +	if (sess == NULL)
>  		return;
> 
> -	sess_priv = priv;
> -
>  	if (sess_priv->roc_se_ctx.auth_key != NULL)
>  		plt_free(sess_priv->roc_se_ctx.auth_key);
> 
> -	memset(priv, 0, cnxk_cpt_sym_session_get_size(NULL));
> -
> -	pool = rte_mempool_from_obj(priv);
> -
> -	set_sym_session_private_data(sess, driver_id, NULL);
> -
> -	rte_mempool_put(pool, priv);
> +	memset(sess_priv, 0, cnxk_cpt_sym_session_get_size(NULL));
>  }
> 
>  void
> -cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> -			   struct rte_cryptodev_sym_session *sess)
> +cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	return sym_session_clear(dev->driver_id, sess);
> +	RTE_SET_USED(dev);
> +
> +	return sym_session_clear(sess);
>  }
> 
>  unsigned int
> diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> index c5332dec53..3c09d10582 100644
> --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> @@ -111,18 +111,15 @@ unsigned int cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev);
> 
>  int cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
>  				   struct rte_crypto_sym_xform *xform,
> -				   struct rte_cryptodev_sym_session *sess,
> -				   struct rte_mempool *pool);
> +				   void *sess);
> 
> -int sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> +int sym_session_configure(struct roc_cpt *roc_cpt,
>  			  struct rte_crypto_sym_xform *xform,
> -			  struct rte_cryptodev_sym_session *sess,
> -			  struct rte_mempool *pool);
> +			  void *sess);
> 
> -void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> -				struct rte_cryptodev_sym_session *sess);
> +void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> 
> -void sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess);
> +void sym_session_clear(void *sess);
> 
>  unsigned int cnxk_ae_session_size_get(struct rte_cryptodev *dev __rte_unused);
> 
> diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> index 176f1a27a0..42229763f8 100644
> --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> @@ -3438,49 +3438,32 @@ dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
>  static int
>  dpaa2_sec_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		DPAA2_SEC_ERR("Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = dpaa2_sec_set_session_parameters(dev, xform, sess_private_data);
> +	ret = dpaa2_sec_set_session_parameters(dev, xform, sess);
>  	if (ret != 0) {
>  		DPAA2_SEC_ERR("Failed to configure session parameters");
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
>  	PMD_INIT_FUNC_TRACE();
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
> +	RTE_SET_USED(dev);
> +	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> 
> -	if (sess_priv) {
> +	if (sess) {
>  		rte_free(s->ctxt);
>  		rte_free(s->cipher_key.data);
>  		rte_free(s->auth_key.data);
>  		memset(s, 0, sizeof(dpaa2_sec_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
>  	}
>  }
> 
> diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c
> b/drivers/crypto/dpaa_sec/dpaa_sec.c
> index 5a087df090..4727088b45 100644
> --- a/drivers/crypto/dpaa_sec/dpaa_sec.c
> +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
> @@ -2537,33 +2537,18 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
> 
>  static int
>  dpaa_sec_sym_session_configure(struct rte_cryptodev *dev,
> -		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		struct rte_crypto_sym_xform *xform, void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		DPAA_SEC_ERR("Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = dpaa_sec_set_session_parameters(dev, xform, sess_private_data);
> +	ret = dpaa_sec_set_session_parameters(dev, xform, sess);
>  	if (ret != 0) {
>  		DPAA_SEC_ERR("failed to configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -			sess_private_data);
> -
> -
>  	return 0;
>  }
> 
> @@ -2584,18 +2569,14 @@ free_session_memory(struct rte_cryptodev *dev, dpaa_sec_session *s)
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -dpaa_sec_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +dpaa_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
>  	PMD_INIT_FUNC_TRACE();
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
> +	RTE_SET_USED(dev);
> +	dpaa_sec_session *s = (dpaa_sec_session *)sess;
> 
> -	if (sess_priv) {
> +	if (sess)
>  		free_session_memory(dev, s);
> -		set_sym_session_private_data(sess, index, NULL);
> -	}
>  }
> 
>  #ifdef RTE_LIB_SECURITY
> diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> index f075054807..b2e5c92598 100644
> --- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> +++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> @@ -220,7 +220,6 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> 
>  	qp->mgr = internals->mgr;
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> 
> @@ -243,10 +242,8 @@ kasumi_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
>  	struct kasumi_private *internals = dev->data->dev_private;
> 
> @@ -255,43 +252,24 @@ kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		KASUMI_LOG(ERR,
> -				"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
>  	ret = kasumi_set_session_parameters(internals->mgr,
> -					sess_private_data, xform);
> +					sess, xform);
>  	if (ret != 0) {
>  		KASUMI_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct kasumi_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct kasumi_session));
>  }
> 
>  struct rte_cryptodev_ops kasumi_pmd_ops = {
> diff --git a/drivers/crypto/mlx5/mlx5_crypto.c
> b/drivers/crypto/mlx5/mlx5_crypto.c
> index 682cf8b607..615ab9f45d 100644
> --- a/drivers/crypto/mlx5/mlx5_crypto.c
> +++ b/drivers/crypto/mlx5/mlx5_crypto.c
> @@ -165,14 +165,12 @@ mlx5_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
>  				  struct rte_crypto_sym_xform *xform,
> -				  struct rte_cryptodev_sym_session *session,
> -				  struct rte_mempool *mp)
> +				  void *session)
>  {
>  	struct mlx5_crypto_priv *priv = dev->data->dev_private;
> -	struct mlx5_crypto_session *sess_private_data;
> +	struct mlx5_crypto_session *sess_private_data = session;
>  	struct rte_crypto_cipher_xform *cipher;
>  	uint8_t encryption_order;
> -	int ret;
> 
>  	if (unlikely(xform->next != NULL)) {
>  		DRV_LOG(ERR, "Xform next is not supported.");
> @@ -183,17 +181,9 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
>  		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
>  		return -ENOTSUP;
>  	}
> -	ret = rte_mempool_get(mp, (void *)&sess_private_data);
> -	if (ret != 0) {
> -		DRV_LOG(ERR,
> -			"Failed to get session %p private data from mempool.",
> -			sess_private_data);
> -		return -ENOMEM;
> -	}
>  	cipher = &xform->cipher;
>  	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
>  	if (sess_private_data->dek == NULL) {
> -		rte_mempool_put(mp, sess_private_data);
>  		DRV_LOG(ERR, "Failed to prepare dek.");
>  		return -ENOMEM;
>  	}
> @@ -228,27 +218,21 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
>  	sess_private_data->dek_id =
>  			rte_cpu_to_be_32(sess_private_data->dek->obj->id
> &
>  					 0xffffff);
> -	set_sym_session_private_data(session, dev->driver_id,
> -				     sess_private_data);
>  	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
>  	return 0;
>  }
> 
>  static void
> -mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
> -			      struct rte_cryptodev_sym_session *sess)
> +mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
>  	struct mlx5_crypto_priv *priv = dev->data->dev_private;
> -	struct mlx5_crypto_session *spriv = get_sym_session_private_data(sess,
> -								dev->driver_id);
> +	struct mlx5_crypto_session *spriv = sess;
> 
>  	if (unlikely(spriv == NULL)) {
>  		DRV_LOG(ERR, "Failed to get session %p private data.", spriv);
>  		return;
>  	}
>  	mlx5_crypto_dek_destroy(priv, spriv->dek);
> -	set_sym_session_private_data(sess, dev->driver_id, NULL);
> -	rte_mempool_put(rte_mempool_from_obj(spriv), spriv);
>  	DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
>  }
> 
> diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> index e04a2c88c7..2e4b27ea21 100644
> --- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> +++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> @@ -704,7 +704,6 @@ mrvl_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  			break;
> 
>  		qp->sess_mp = qp_conf->mp_session;
> -		qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  		memset(&qp->stats, 0, sizeof(qp->stats));
>  		dev->data->queue_pairs[qp_id] = qp;
> @@ -735,12 +734,9 @@ mrvl_crypto_pmd_sym_session_get_size(__rte_unused struct rte_cryptodev *dev)
>   */
>  static int
>  mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
> -		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mp)
> +		struct rte_crypto_sym_xform *xform, void *sess)
>  {
>  	struct mrvl_crypto_session *mrvl_sess;
> -	void *sess_private_data;
>  	int ret;
> 
>  	if (sess == NULL) {
> @@ -748,25 +744,16 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mp, &sess_private_data)) {
> -		CDEV_LOG_ERR("Couldn't get object from session mempool.");
> -		return -ENOMEM;
> -	}
> +	memset(sess, 0, sizeof(struct mrvl_crypto_session));
> 
> -	memset(sess_private_data, 0, sizeof(struct mrvl_crypto_session));
> -
> -	ret = mrvl_crypto_set_session_parameters(sess_private_data, xform);
> +	ret = mrvl_crypto_set_session_parameters(sess, xform);
>  	if (ret != 0) {
>  		MRVL_LOG(ERR, "Failed to configure session parameters!");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mp, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
> 
> -	mrvl_sess = (struct mrvl_crypto_session *)sess_private_data;
> +	mrvl_sess = (struct mrvl_crypto_session *)sess;
>  	if (sam_session_create(&mrvl_sess->sam_sess_params,
>  				&mrvl_sess->sam_sess) < 0) {
>  		MRVL_LOG(DEBUG, "Failed to create session!");
> @@ -789,17 +776,13 @@ mrvl_crypto_pmd_sym_session_configure(__rte_unused struct rte_cryptodev *dev,
>   * @returns 0. Always.
>   */
>  static void
> -mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> +	if (sess) {
>  		struct mrvl_crypto_session *mrvl_sess =
> -			(struct mrvl_crypto_session *)sess_priv;
> +			(struct mrvl_crypto_session *)sess;
> 
>  		if (mrvl_sess->sam_sess &&
>  		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
> @@ -807,9 +790,6 @@ mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
>  		}
> 
>  		memset(mrvl_sess, 0, sizeof(struct mrvl_crypto_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
>  	}
>  }
> 
> diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
> index f8b7edcd69..0c9bbfef46 100644
> --- a/drivers/crypto/nitrox/nitrox_sym.c
> +++ b/drivers/crypto/nitrox/nitrox_sym.c
> @@ -532,22 +532,16 @@ configure_aead_ctx(struct rte_crypto_aead_xform *xform,
>  static int
>  nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
>  			      struct rte_crypto_sym_xform *xform,
> -			      struct rte_cryptodev_sym_session *sess,
> -			      struct rte_mempool *mempool)
> +			      void *sess)
>  {
> -	void *mp_obj;
>  	struct nitrox_crypto_ctx *ctx;
>  	struct rte_crypto_cipher_xform *cipher_xform = NULL;
>  	struct rte_crypto_auth_xform *auth_xform = NULL;
>  	struct rte_crypto_aead_xform *aead_xform = NULL;
>  	int ret = -EINVAL;
> 
> -	if (rte_mempool_get(mempool, &mp_obj)) {
> -		NITROX_LOG(ERR, "Couldn't allocate context\n");
> -		return -ENOMEM;
> -	}
> -
> -	ctx = mp_obj;
> +	RTE_SET_USED(cdev);
> +	ctx = sess;
>  	ctx->nitrox_chain = get_crypto_chain_order(xform);
>  	switch (ctx->nitrox_chain) {
>  	case NITROX_CHAIN_CIPHER_ONLY:
> @@ -586,28 +580,17 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
>  	}
> 
>  	ctx->iova = rte_mempool_virt2iova(ctx);
> -	set_sym_session_private_data(sess, cdev->driver_id, ctx);
>  	return 0;
>  err:
> -	rte_mempool_put(mempool, mp_obj);
>  	return ret;
>  }
> 
>  static void
> -nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev,
> -			  struct rte_cryptodev_sym_session *sess)
> +nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev, void *sess)
>  {
> -	struct nitrox_crypto_ctx *ctx = get_sym_session_private_data(sess,
> -							cdev->driver_id);
> -	struct rte_mempool *sess_mp;
> -
> -	if (!ctx)
> -		return;
> -
> -	memset(ctx, 0, sizeof(*ctx));
> -	sess_mp = rte_mempool_from_obj(ctx);
> -	set_sym_session_private_data(sess, cdev->driver_id, NULL);
> -	rte_mempool_put(sess_mp, ctx);
> +	RTE_SET_USED(cdev);
> +	if (sess)
> +		memset(sess, 0, sizeof(struct nitrox_crypto_ctx));
>  }
> 
>  static struct nitrox_crypto_ctx *
> diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
> index a8b5a06e7f..65bfa8dcf7 100644
> --- a/drivers/crypto/null/null_crypto_pmd_ops.c
> +++ b/drivers/crypto/null/null_crypto_pmd_ops.c
> @@ -234,7 +234,6 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	}
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> 
> @@ -258,10 +257,8 @@ null_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mp)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	if (unlikely(sess == NULL)) {
> @@ -269,42 +266,23 @@ null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mp, &sess_private_data)) {
> -		NULL_LOG(ERR,
> -				"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = null_crypto_set_session_parameters(sess_private_data, xform);
> +	ret = null_crypto_set_session_parameters(sess, xform);
>  	if (ret != 0) {
>  		NULL_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mp, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct null_crypto_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct null_crypto_session));
>  }
> 
>  static struct rte_cryptodev_ops pmd_ops = {
> diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> index 7c6b1e45b4..95659e472b 100644
> --- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> @@ -49,7 +49,6 @@ struct cpt_instance {
>  	uint32_t queue_id;
>  	uintptr_t rsvd;
>  	struct rte_mempool *sess_mp;
> -	struct rte_mempool *sess_mp_priv;
>  	struct cpt_qp_meta_info meta_info;
>  	uint8_t ca_enabled;
>  };
> diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> index 9e8fd495cf..abd0963be0 100644
> --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> @@ -171,7 +171,6 @@ otx_cpt_que_pair_setup(struct rte_cryptodev *dev,
> 
>  	instance->queue_id = que_pair_id;
>  	instance->sess_mp = qp_conf->mp_session;
> -	instance->sess_mp_priv = qp_conf->mp_session_private;
>  	dev->data->queue_pairs[que_pair_id] = instance;
> 
>  	return 0;
> @@ -243,29 +242,22 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
>  }
> 
>  static int
> -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> -		      struct rte_cryptodev_sym_session *sess,
> -		      struct rte_mempool *pool)
> +sym_session_configure(struct rte_crypto_sym_xform *xform,
> +		      void *sess)
>  {
>  	struct rte_crypto_sym_xform *temp_xform = xform;
>  	struct cpt_sess_misc *misc;
>  	vq_cmd_word3_t vq_cmd_w3;
> -	void *priv;
>  	int ret;
> 
>  	ret = sym_xform_verify(xform);
>  	if (unlikely(ret))
>  		return ret;
> 
> -	if (unlikely(rte_mempool_get(pool, &priv))) {
> -		CPT_LOG_ERR("Could not allocate session private data");
> -		return -ENOMEM;
> -	}
> -
> -	memset(priv, 0, sizeof(struct cpt_sess_misc) +
> +	memset(sess, 0, sizeof(struct cpt_sess_misc) +
>  			offsetof(struct cpt_ctx, mc_ctx));
> 
> -	misc = priv;
> +	misc = sess;
> 
>  	for ( ; xform != NULL; xform = xform->next) {
>  		switch (xform->type) {
> @@ -301,8 +293,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
>  		goto priv_put;
>  	}
> 
> -	set_sym_session_private_data(sess, driver_id, priv);
> -
>  	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
>  			     sizeof(struct cpt_sess_misc);
> 
> @@ -316,56 +306,46 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
>  	return 0;
> 
>  priv_put:
> -	if (priv)
> -		rte_mempool_put(pool, priv);
>  	return -ENOTSUP;
>  }
> 
>  static void
> -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> +sym_session_clear(void *sess)
>  {
> -	void *priv = get_sym_session_private_data(sess, driver_id);
>  	struct cpt_sess_misc *misc;
> -	struct rte_mempool *pool;
>  	struct cpt_ctx *ctx;
> 
> -	if (priv == NULL)
> +	if (sess == NULL)
>  		return;
> 
> -	misc = priv;
> +	misc = sess;
>  	ctx = SESS_PRIV(misc);
> 
>  	if (ctx->auth_key != NULL)
>  		rte_free(ctx->auth_key);
> 
> -	memset(priv, 0, cpt_get_session_size());
> -
> -	pool = rte_mempool_from_obj(priv);
> -
> -	set_sym_session_private_data(sess, driver_id, NULL);
> -
> -	rte_mempool_put(pool, priv);
> +	memset(sess, 0, cpt_get_session_size());
>  }
> 
>  static int
>  otx_cpt_session_cfg(struct rte_cryptodev *dev,
>  		    struct rte_crypto_sym_xform *xform,
> -		    struct rte_cryptodev_sym_session *sess,
> -		    struct rte_mempool *pool)
> +		    void *sess)
>  {
>  	CPT_PMD_INIT_FUNC_TRACE();
> +	RTE_SET_USED(dev);
> 
> -	return sym_session_configure(dev->driver_id, xform, sess, pool);
> +	return sym_session_configure(xform, sess);
>  }
> 
> 
>  static void
> -otx_cpt_session_clear(struct rte_cryptodev *dev,
> -		  struct rte_cryptodev_sym_session *sess)
> +otx_cpt_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
>  	CPT_PMD_INIT_FUNC_TRACE();
> +	RTE_SET_USED(dev);
> 
> -	return sym_session_clear(dev->driver_id, sess);
> +	return sym_session_clear(sess);
>  }
> 
>  static unsigned int
> @@ -576,7 +556,6 @@ static __rte_always_inline void * __rte_hot
>  otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
>  				struct rte_crypto_op *op)
>  {
> -	const int driver_id = otx_cryptodev_driver_id;
>  	struct rte_crypto_sym_op *sym_op = op->sym;
>  	struct rte_cryptodev_sym_session *sess;
>  	void *req;
> @@ -589,8 +568,12 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
>  		return NULL;
>  	}
> 
> -	ret = sym_session_configure(driver_id, sym_op->xform, sess,
> -				    instance->sess_mp_priv);
> +	sess->sess_data[otx_cryptodev_driver_id].data =
> +			(void *)((uint8_t *)sess +
> +			rte_cryptodev_sym_get_header_session_size() +
> +			(otx_cryptodev_driver_id * sess->priv_sz));
> +	ret = sym_session_configure(sym_op->xform,
> +			sess->sess_data[otx_cryptodev_driver_id].data);
>  	if (ret)
>  		goto sess_put;
> 
> @@ -604,7 +587,7 @@ otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
>  	return req;
> 
>  priv_put:
> -	sym_session_clear(driver_id, sess);
> +	sym_session_clear(sess);
>  sess_put:
>  	rte_mempool_put(instance->sess_mp, sess);
>  	return NULL;
> @@ -913,7 +896,6 @@ free_sym_session_data(const struct cpt_instance *instance,
>  	memset(cop->sym->session, 0,
>  	       rte_cryptodev_sym_get_existing_header_session_size(
>  		       cop->sym->session));
> -	rte_mempool_put(instance->sess_mp_priv, sess_private_data_t);
>  	rte_mempool_put(instance->sess_mp, cop->sym->session);
>  	cop->sym->session = NULL;
>  }
> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> index 7b744cd4b4..dcfbc49996 100644
> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> @@ -371,29 +371,21 @@ sym_xform_verify(struct rte_crypto_sym_xform *xform)
>  }
> 
>  static int
> -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> -		      struct rte_cryptodev_sym_session *sess,
> -		      struct rte_mempool *pool)
> +sym_session_configure(struct rte_crypto_sym_xform *xform, void *sess)
>  {
>  	struct rte_crypto_sym_xform *temp_xform = xform;
>  	struct cpt_sess_misc *misc;
>  	vq_cmd_word3_t vq_cmd_w3;
> -	void *priv;
>  	int ret;
> 
>  	ret = sym_xform_verify(xform);
>  	if (unlikely(ret))
>  		return ret;
> 
> -	if (unlikely(rte_mempool_get(pool, &priv))) {
> -		CPT_LOG_ERR("Could not allocate session private data");
> -		return -ENOMEM;
> -	}
> -
> -	memset(priv, 0, sizeof(struct cpt_sess_misc) +
> +	memset(sess, 0, sizeof(struct cpt_sess_misc) +
>  			offsetof(struct cpt_ctx, mc_ctx));
> 
> -	misc = priv;
> +	misc = sess;
> 
>  	for ( ; xform != NULL; xform = xform->next) {
>  		switch (xform->type) {
> @@ -414,7 +406,7 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
>  		}
> 
>  		if (ret)
> -			goto priv_put;
> +			return -ENOTSUP;
>  	}
> 
>  	if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
> @@ -425,12 +417,9 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
>  			rte_free(ctx->auth_key);
>  			ctx->auth_key = NULL;
>  		}
> -		ret = -ENOTSUP;
> -		goto priv_put;
> +		return -ENOTSUP;
>  	}
> 
> -	set_sym_session_private_data(sess, driver_id, misc);
> -
>  	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
>  			     sizeof(struct cpt_sess_misc);
> 
> @@ -451,11 +440,6 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
>  	misc->cpt_inst_w7 = vq_cmd_w3.u64;
> 
>  	return 0;
> -
> -priv_put:
> -	rte_mempool_put(pool, priv);
> -
> -	return -ENOTSUP;
>  }
> 
>  static __rte_always_inline int32_t __rte_hot
> @@ -765,7 +749,6 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
>  			      struct pending_queue *pend_q,
>  			      unsigned int burst_index)
>  {
> -	const int driver_id = otx2_cryptodev_driver_id;
>  	struct rte_crypto_sym_op *sym_op = op->sym;
>  	struct rte_cryptodev_sym_session *sess;
>  	int ret;
> @@ -775,8 +758,12 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
>  	if (sess == NULL)
>  		return -ENOMEM;
> 
> -	ret = sym_session_configure(driver_id, sym_op->xform, sess,
> -				    qp->sess_mp_priv);
> +	sess->sess_data[otx2_cryptodev_driver_id].data =
> +			(void *)((uint8_t *)sess +
> +			rte_cryptodev_sym_get_header_session_size() +
> +			(otx2_cryptodev_driver_id * sess->priv_sz));
> +	ret = sym_session_configure(sym_op->xform,
> +			sess->sess_data[otx2_cryptodev_driver_id].data);
>  	if (ret)
>  		goto sess_put;
> 
> @@ -790,7 +777,7 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
>  	return 0;
> 
>  priv_put:
> -	sym_session_clear(driver_id, sess);
> +	sym_session_clear(sess);
>  sess_put:
>  	rte_mempool_put(qp->sess_mp, sess);
>  	return ret;
> @@ -1035,8 +1022,7 @@ otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
>  		}
> 
>  		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> -			sym_session_clear(otx2_cryptodev_driver_id,
> -					  cop->sym->session);
> +			sym_session_clear(cop->sym->session);
>  			sz = rte_cryptodev_sym_get_existing_header_session_size(
>  					cop->sym->session);
>  			memset(cop->sym->session, 0, sz);
> @@ -1291,7 +1277,6 @@ otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	}
> 
>  	qp->sess_mp = conf->mp_session;
> -	qp->sess_mp_priv = conf->mp_session_private;
>  	dev->data->queue_pairs[qp_id] = qp;
> 
>  	return 0;
> @@ -1330,21 +1315,22 @@ otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
>  			       struct rte_crypto_sym_xform *xform,
> -			       struct rte_cryptodev_sym_session *sess,
> -			       struct rte_mempool *pool)
> +			       void *sess)
>  {
>  	CPT_PMD_INIT_FUNC_TRACE();
> +	RTE_SET_USED(dev);
> 
> -	return sym_session_configure(dev->driver_id, xform, sess, pool);
> +	return sym_session_configure(xform, sess);
>  }
> 
>  static void
>  otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
> -			   struct rte_cryptodev_sym_session *sess)
> +			   void *sess)
>  {
>  	CPT_PMD_INIT_FUNC_TRACE();
> +	RTE_SET_USED(dev);
> 
> -	return sym_session_clear(dev->driver_id, sess);
> +	return sym_session_clear(sess);
>  }
> 
>  static unsigned int
> diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> index 01c081a216..5f63eaf7b7 100644
> --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> @@ -8,29 +8,21 @@
>  #include "cpt_pmd_logs.h"
> 
>  static void
> -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> +sym_session_clear(void *sess)
>  {
> -	void *priv = get_sym_session_private_data(sess, driver_id);
>  	struct cpt_sess_misc *misc;
> -	struct rte_mempool *pool;
>  	struct cpt_ctx *ctx;
> 
> -	if (priv == NULL)
> +	if (sess == NULL)
>  		return;
> 
> -	misc = priv;
> +	misc = sess;
>  	ctx = SESS_PRIV(misc);
> 
>  	if (ctx->auth_key != NULL)
>  		rte_free(ctx->auth_key);
> 
> -	memset(priv, 0, cpt_get_session_size());
> -
> -	pool = rte_mempool_from_obj(priv);
> -
> -	set_sym_session_private_data(sess, driver_id, NULL);
> -
> -	rte_mempool_put(pool, priv);
> +	memset(sess, 0, cpt_get_session_size());
>  }
> 
>  static __rte_always_inline uint8_t
> diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> index 52715f86f8..1b48a6b400 100644
> --- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> +++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> @@ -741,7 +741,6 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  		goto qp_setup_cleanup;
> 
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->stats, 0, sizeof(qp->stats));
> 
> @@ -772,10 +771,8 @@ openssl_pmd_asym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	if (unlikely(sess == NULL)) {
> @@ -783,24 +780,12 @@ openssl_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		OPENSSL_LOG(ERR,
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	ret = openssl_set_session_parameters(sess_private_data, xform);
> +	ret = openssl_set_session_parameters(sess, xform);
>  	if (ret != 0) {
>  		OPENSSL_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -			sess_private_data);
> -
>  	return 0;
>  }
> 
> @@ -1154,19 +1139,13 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -openssl_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +openssl_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		openssl_reset_session(sess_priv);
> -		memset(sess_priv, 0, sizeof(struct openssl_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> +	if (sess) {
> +		openssl_reset_session(sess);
> +		memset(sess, 0, sizeof(struct openssl_session));
>  	}
>  }
> 
> diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
> index 2a22347c7f..114bf081c1 100644
> --- a/drivers/crypto/qat/qat_sym_session.c
> +++ b/drivers/crypto/qat/qat_sym_session.c
> @@ -172,21 +172,14 @@ qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
>  }
> 
>  void
> -qat_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +qat_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
> +	struct qat_sym_session *s = (struct qat_sym_session *)sess;
> 
> -	if (sess_priv) {
> +	if (sess) {
>  		if (s->bpi_ctx)
>  			bpi_cipher_ctx_free(s->bpi_ctx);
>  		memset(s, 0, qat_sym_session_get_private_size(dev));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
>  	}
>  }
> 
> @@ -458,31 +451,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
>  int
>  qat_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess_private_data)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		CDEV_LOG_ERR(
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
>  	ret = qat_sym_session_set_parameters(dev, xform, sess_private_data);
>  	if (ret != 0) {
>  		QAT_LOG(ERR,
>  		    "Crypto QAT PMD: failed to configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
> diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
> index 7fcc1d6f7b..6da29e2305 100644
> --- a/drivers/crypto/qat/qat_sym_session.h
> +++ b/drivers/crypto/qat/qat_sym_session.h
> @@ -112,8 +112,7 @@ struct qat_sym_session {
>  int
>  qat_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool);
> +		void *sess);
> 
>  int
>  qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> @@ -135,8 +134,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
>  				struct qat_sym_session *session);
> 
>  void
> -qat_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *session);
> +qat_sym_session_clear(struct rte_cryptodev *dev, void *session);
> 
>  unsigned int
>  qat_sym_session_get_private_size(struct rte_cryptodev *dev);
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> index 465b88ade8..87260b5a22 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -476,9 +476,7 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> 
>  static int
>  scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> -	struct rte_crypto_sym_xform *xform,
> -	struct rte_cryptodev_sym_session *sess,
> -	struct rte_mempool *mempool)
> +	struct rte_crypto_sym_xform *xform, void *sess)
>  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint32_t i;
> @@ -488,7 +486,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		struct scheduler_worker *worker = &sched_ctx->workers[i];
> 
>  		ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> -					xform, mempool);
> +					xform);
>  		if (ret < 0) {
>  			CR_SCHED_LOG(ERR, "unable to config sym session");
>  			return ret;
> @@ -500,8 +498,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint32_t i;
> diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> index 3f46014b7d..b0f8f6d86a 100644
> --- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> +++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> @@ -226,7 +226,6 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> 
>  	qp->mgr = internals->mgr;
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> 
> @@ -250,10 +249,8 @@ snow3g_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  static int
>  snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
>  	struct snow3g_private *internals = dev->data->dev_private;
> 
> @@ -262,43 +259,24 @@ snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		SNOW3G_LOG(ERR,
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
>  	ret = snow3g_set_session_parameters(internals->mgr,
> -					sess_private_data, xform);
> +					sess, xform);
>  	if (ret != 0) {
>  		SNOW3G_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct snow3g_session));
> -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct snow3g_session));
>  }
> 
>  struct rte_cryptodev_ops snow3g_pmd_ops = {
> diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
> index 8faa39df4a..de52fec32e 100644
> --- a/drivers/crypto/virtio/virtio_cryptodev.c
> +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> @@ -37,11 +37,10 @@ static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev);
>  static unsigned int virtio_crypto_sym_get_session_private_size(
>  		struct rte_cryptodev *dev);
>  static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess);
> +		void *sess);
>  static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *session,
> -		struct rte_mempool *mp);
> +		void *session);
> 
>  /*
>   * The set of PCI devices this driver supports
> @@ -927,7 +926,7 @@ virtio_crypto_check_sym_clear_session_paras(
>  static void
>  virtio_crypto_sym_clear_session(
>  		struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +		void *sess)
>  {
>  	struct virtio_crypto_hw *hw;
>  	struct virtqueue *vq;
> @@ -1290,11 +1289,9 @@ static int
>  virtio_crypto_check_sym_configure_session_paras(
>  		struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sym_sess,
> -		struct rte_mempool *mempool)
> +		void *sym_sess)
>  {
> -	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
> -		unlikely(mempool == NULL)) {
> +	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL)) {
>  		VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
>  		return -1;
>  	}
> @@ -1309,12 +1306,9 @@ static int
>  virtio_crypto_sym_configure_session(
>  		struct rte_cryptodev *dev,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
>  	int ret;
> -	struct virtio_crypto_session crypto_sess;
> -	void *session_private = &crypto_sess;
>  	struct virtio_crypto_session *session;
>  	struct virtio_crypto_op_ctrl_req *ctrl_req;
>  	enum virtio_crypto_cmd_id cmd_id;
> @@ -1326,19 +1320,13 @@ virtio_crypto_sym_configure_session(
>  	PMD_INIT_FUNC_TRACE();
> 
>  	ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
> -			sess, mempool);
> +			sess);
>  	if (ret < 0) {
>  		VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
>  		return ret;
>  	}
> 
> -	if (rte_mempool_get(mempool, &session_private)) {
> -		VIRTIO_CRYPTO_SESSION_LOG_ERR(
> -			"Couldn't get object from session mempool");
> -		return -ENOMEM;
> -	}
> -
> -	session = (struct virtio_crypto_session *)session_private;
> +	session = (struct virtio_crypto_session *)sess;
>  	memset(session, 0, sizeof(struct virtio_crypto_session));
>  	ctrl_req = &session->ctrl;
>  	ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
> @@ -1401,9 +1389,6 @@ virtio_crypto_sym_configure_session(
>  		goto error_out;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		session_private);
> -
>  	return 0;
> 
>  error_out:
> diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> index 38642d45ab..04126c8a04 100644
> --- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> +++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> @@ -226,7 +226,6 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev,
> uint16_t qp_id,
> 
>  	qp->mb_mgr = internals->mb_mgr;
>  	qp->sess_mp = qp_conf->mp_session;
> -	qp->sess_mp_priv = qp_conf->mp_session_private;
> 
>  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> 
> @@ -250,10 +249,8 @@ zuc_pmd_sym_session_get_size(struct
> rte_cryptodev *dev __rte_unused)
>  static int
>  zuc_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
>  		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_mempool *mempool)
> +		void *sess)
>  {
> -	void *sess_private_data;
>  	int ret;
> 
>  	if (unlikely(sess == NULL)) {
> @@ -261,43 +258,23 @@ zuc_pmd_sym_session_configure(struct
> rte_cryptodev *dev __rte_unused,
>  		return -EINVAL;
>  	}
> 
> -	if (rte_mempool_get(mempool, &sess_private_data)) {
> -		ZUC_LOG(ERR,
> -			"Couldn't get object from session mempool");
> -
> -		return -ENOMEM;
> -	}
> -
> -	ret = zuc_set_session_parameters(sess_private_data, xform);
> +	ret = zuc_set_session_parameters(sess, xform);
>  	if (ret != 0) {
>  		ZUC_LOG(ERR, "failed configure session parameters");
> -
> -		/* Return session to mempool */
> -		rte_mempool_put(mempool, sess_private_data);
>  		return ret;
>  	}
> 
> -	set_sym_session_private_data(sess, dev->driver_id,
> -		sess_private_data);
> -
>  	return 0;
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
>  static void
> -zuc_pmd_sym_session_clear(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess)
> +zuc_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
>  {
> -	uint8_t index = dev->driver_id;
> -	void *sess_priv = get_sym_session_private_data(sess, index);
> -
> +	RTE_SET_USED(dev);
>  	/* Zero out the whole structure */
> -	if (sess_priv) {
> -		memset(sess_priv, 0, sizeof(struct zuc_session));
> -		struct rte_mempool *sess_mp =
> rte_mempool_from_obj(sess_priv);
> -		set_sym_session_private_data(sess, index, NULL);
> -		rte_mempool_put(sess_mp, sess_priv);
> -	}
> +	if (sess)
> +		memset(sess, 0, sizeof(struct zuc_session));
>  }
> 
>  struct rte_cryptodev_ops zuc_pmd_ops = {
> diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> index b33cb7e139..8522f2dfda 100644
> --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> @@ -38,8 +38,7 @@ otx2_ca_deq_post_process(const struct otx2_cpt_qp
> *qp,
>  		}
> 
>  		if (unlikely(cop->sess_type ==
> RTE_CRYPTO_OP_SESSIONLESS)) {
> -			sym_session_clear(otx2_cryptodev_driver_id,
> -					  cop->sym->session);
> +			sym_session_clear(cop->sym->session);
>  			memset(cop->sym->session, 0,
> 
> 	rte_cryptodev_sym_get_existing_header_session_size(
>  				cop->sym->session));
> diff --git a/examples/fips_validation/fips_dev_self_test.c
> b/examples/fips_validation/fips_dev_self_test.c
> index b4eab05a98..bbc27a1b6f 100644
> --- a/examples/fips_validation/fips_dev_self_test.c
> +++ b/examples/fips_validation/fips_dev_self_test.c
> @@ -969,7 +969,6 @@ struct fips_dev_auto_test_env {
>  	struct rte_mempool *mpool;
>  	struct rte_mempool *op_pool;
>  	struct rte_mempool *sess_pool;
> -	struct rte_mempool *sess_priv_pool;
>  	struct rte_mbuf *mbuf;
>  	struct rte_crypto_op *op;
>  };
> @@ -981,7 +980,7 @@ typedef int
> (*fips_dev_self_test_prepare_xform_t)(uint8_t,
>  		uint32_t);
> 
>  typedef int (*fips_dev_self_test_prepare_op_t)(struct rte_crypto_op *,
> -		struct rte_mbuf *, struct rte_cryptodev_sym_session *,
> +		struct rte_mbuf *, void *,
>  		uint32_t, struct fips_dev_self_test_vector *);
> 
>  typedef int (*fips_dev_self_test_check_result_t)(struct rte_crypto_op *,
> @@ -1173,7 +1172,7 @@ prepare_aead_xform(uint8_t dev_id,
>  static int
>  prepare_cipher_op(struct rte_crypto_op *op,
>  		struct rte_mbuf *mbuf,
> -		struct rte_cryptodev_sym_session *session,
> +		void *session,
>  		uint32_t dir,
>  		struct fips_dev_self_test_vector *vec)
>  {
> @@ -1212,7 +1211,7 @@ prepare_cipher_op(struct rte_crypto_op *op,
>  static int
>  prepare_auth_op(struct rte_crypto_op *op,
>  		struct rte_mbuf *mbuf,
> -		struct rte_cryptodev_sym_session *session,
> +		void *session,
>  		uint32_t dir,
>  		struct fips_dev_self_test_vector *vec)
>  {
> @@ -1251,7 +1250,7 @@ prepare_auth_op(struct rte_crypto_op *op,
>  static int
>  prepare_aead_op(struct rte_crypto_op *op,
>  		struct rte_mbuf *mbuf,
> -		struct rte_cryptodev_sym_session *session,
> +		void *session,
>  		uint32_t dir,
>  		struct fips_dev_self_test_vector *vec)
>  {
> @@ -1464,7 +1463,7 @@ run_single_test(uint8_t dev_id,
>  		uint32_t negative_test)
>  {
>  	struct rte_crypto_sym_xform xform;
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
>  	uint16_t n_deqd;
>  	uint8_t key[256];
>  	int ret;
> @@ -1484,8 +1483,7 @@ run_single_test(uint8_t dev_id,
>  	if (!sess)
>  		return -ENOMEM;
> 
> -	ret = rte_cryptodev_sym_session_init(dev_id,
> -			sess, &xform, env->sess_priv_pool);
> +	ret = rte_cryptodev_sym_session_init(dev_id, sess, &xform);
>  	if (ret < 0) {
>  		RTE_LOG(ERR, PMD, "Error %i: Init session\n", ret);
>  		return ret;
> @@ -1533,8 +1531,6 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
>  		rte_mempool_free(env->op_pool);
>  	if (env->sess_pool)
>  		rte_mempool_free(env->sess_pool);
> -	if (env->sess_priv_pool)
> -		rte_mempool_free(env->sess_priv_pool);
> 
>  	rte_cryptodev_stop(dev_id);
>  }
> @@ -1542,7 +1538,7 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
>  static int
>  fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env
> *env)
>  {
> -	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> +	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
>  	uint32_t sess_sz =
> rte_cryptodev_sym_get_private_session_size(dev_id);
>  	struct rte_cryptodev_config conf;
>  	char name[128];
> @@ -1586,25 +1582,13 @@ fips_dev_auto_test_init(uint8_t dev_id, struct
> fips_dev_auto_test_env *env)
>  	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_POOL", dev_id);
> 
>  	env->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> -			128, 0, 0, 0, rte_cryptodev_socket_id(dev_id));
> +			128, sess_sz, 0, 0, rte_cryptodev_socket_id(dev_id));
>  	if (!env->sess_pool) {
>  		ret = -ENOMEM;
>  		goto error_exit;
>  	}
> 
> -	memset(name, 0, 128);
> -	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_PRIV_POOL", dev_id);
> -
> -	env->sess_priv_pool = rte_mempool_create(name,
> -			128, sess_sz, 0, 0, NULL, NULL, NULL,
> -			NULL, rte_cryptodev_socket_id(dev_id), 0);
> -	if (!env->sess_priv_pool) {
> -		ret = -ENOMEM;
> -		goto error_exit;
> -	}
> -
>  	qp_conf.mp_session = env->sess_pool;
> -	qp_conf.mp_session_private = env->sess_priv_pool;
> 
>  	ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
>  			rte_cryptodev_socket_id(dev_id));
> diff --git a/examples/fips_validation/main.c
> b/examples/fips_validation/main.c
> index a8daad1f48..03c6ccb5b8 100644
> --- a/examples/fips_validation/main.c
> +++ b/examples/fips_validation/main.c
> @@ -48,13 +48,12 @@ struct cryptodev_fips_validate_env {
>  	uint16_t mbuf_data_room;
>  	struct rte_mempool *mpool;
>  	struct rte_mempool *sess_mpool;
> -	struct rte_mempool *sess_priv_mpool;
>  	struct rte_mempool *op_pool;
>  	struct rte_mbuf *mbuf;
>  	uint8_t *digest;
>  	uint16_t digest_len;
>  	struct rte_crypto_op *op;
> -	struct rte_cryptodev_sym_session *sess;
> +	void *sess;
>  	uint16_t self_test;
>  	struct fips_dev_broken_test_config *broken_test_config;
>  } env;
> @@ -63,7 +62,7 @@ static int
>  cryptodev_fips_validate_app_int(void)
>  {
>  	struct rte_cryptodev_config conf = {rte_socket_id(), 1, 0};
> -	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> +	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
>  	struct rte_cryptodev_info dev_info;
>  	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(
>  			env.dev_id);
> @@ -103,16 +102,11 @@ cryptodev_fips_validate_app_int(void)
>  	ret = -ENOMEM;
> 
>  	env.sess_mpool = rte_cryptodev_sym_session_pool_create(
> -			"FIPS_SESS_MEMPOOL", 16, 0, 0, 0, rte_socket_id());
> +			"FIPS_SESS_MEMPOOL", 16, sess_sz, 0, 0,
> +			rte_socket_id());
>  	if (!env.sess_mpool)
>  		goto error_exit;
> 
> -	env.sess_priv_mpool =
> rte_mempool_create("FIPS_SESS_PRIV_MEMPOOL",
> -			16, sess_sz, 0, 0, NULL, NULL, NULL,
> -			NULL, rte_socket_id(), 0);
> -	if (!env.sess_priv_mpool)
> -		goto error_exit;
> -
>  	env.op_pool = rte_crypto_op_pool_create(
>  			"FIPS_OP_POOL",
>  			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> @@ -127,7 +121,6 @@ cryptodev_fips_validate_app_int(void)
>  		goto error_exit;
> 
>  	qp_conf.mp_session = env.sess_mpool;
> -	qp_conf.mp_session_private = env.sess_priv_mpool;
> 
>  	ret = rte_cryptodev_queue_pair_setup(env.dev_id, 0, &qp_conf,
>  			rte_socket_id());
> @@ -141,8 +134,6 @@ cryptodev_fips_validate_app_int(void)
>  	rte_mempool_free(env.mpool);
>  	if (env.sess_mpool)
>  		rte_mempool_free(env.sess_mpool);
> -	if (env.sess_priv_mpool)
> -		rte_mempool_free(env.sess_priv_mpool);
>  	if (env.op_pool)
>  		rte_mempool_free(env.op_pool);
> 
> @@ -158,7 +149,6 @@ cryptodev_fips_validate_app_uninit(void)
>  	rte_cryptodev_sym_session_free(env.sess);
>  	rte_mempool_free(env.mpool);
>  	rte_mempool_free(env.sess_mpool);
> -	rte_mempool_free(env.sess_priv_mpool);
>  	rte_mempool_free(env.op_pool);
>  }
> 
> @@ -1179,7 +1169,7 @@ fips_run_test(void)
>  		return -ENOMEM;
> 
>  	ret = rte_cryptodev_sym_session_init(env.dev_id,
> -			env.sess, &xform, env.sess_priv_mpool);
> +			env.sess, &xform);
>  	if (ret < 0) {
>  		RTE_LOG(ERR, USER1, "Error %i: Init session\n",
>  				ret);
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> secgw/ipsec-secgw.c
> index 7ad94cb822..65528ee2e7 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -1216,15 +1216,11 @@ ipsec_poll_mode_worker(void)
>  	qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
>  	qconf->inbound.cdev_map = cdev_map_in;
>  	qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
> -	qconf->inbound.session_priv_pool =
> -			socket_ctx[socket_id].session_priv_pool;
>  	qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
>  	qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
>  	qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
>  	qconf->outbound.cdev_map = cdev_map_out;
>  	qconf->outbound.session_pool =
> socket_ctx[socket_id].session_pool;
> -	qconf->outbound.session_priv_pool =
> -			socket_ctx[socket_id].session_priv_pool;
>  	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
>  	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
> 
> @@ -2142,8 +2138,6 @@ cryptodevs_init(uint16_t req_queue_num)
>  		qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
>  		qp_conf.mp_session =
>  			socket_ctx[dev_conf.socket_id].session_pool;
> -		qp_conf.mp_session_private =
> -			socket_ctx[dev_conf.socket_id].session_priv_pool;
>  		for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
>  			if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
>  					&qp_conf, dev_conf.socket_id))
> @@ -2405,37 +2399,37 @@ session_pool_init(struct socket_ctx *ctx, int32_t
> socket_id, size_t sess_sz)
>  		printf("Allocated session pool on socket %d\n",
> 	socket_id);
>  }
> 
> -static void
> -session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> -	size_t sess_sz)
> -{
> -	char mp_name[RTE_MEMPOOL_NAMESIZE];
> -	struct rte_mempool *sess_mp;
> -	uint32_t nb_sess;
> -
> -	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> -			"sess_mp_priv_%u", socket_id);
> -	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> -		rte_lcore_count());
> -	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> -			CDEV_MP_CACHE_MULTIPLIER);
> -	sess_mp = rte_mempool_create(mp_name,
> -			nb_sess,
> -			sess_sz,
> -			CDEV_MP_CACHE_SZ,
> -			0, NULL, NULL, NULL,
> -			NULL, socket_id,
> -			0);
> -	ctx->session_priv_pool = sess_mp;
> -
> -	if (ctx->session_priv_pool == NULL)
> -		rte_exit(EXIT_FAILURE,
> -			"Cannot init session priv pool on socket %d\n",
> -			socket_id);
> -	else
> -		printf("Allocated session priv pool on socket %d\n",
> -			socket_id);
> -}
> +//static void
> +//session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> +//	size_t sess_sz)
> +//{
> +//	char mp_name[RTE_MEMPOOL_NAMESIZE];
> +//	struct rte_mempool *sess_mp;
> +//	uint32_t nb_sess;
> +//
> +//	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> +//			"sess_mp_priv_%u", socket_id);
> +//	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> +//		rte_lcore_count());
> +//	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> +//			CDEV_MP_CACHE_MULTIPLIER);
> +//	sess_mp = rte_mempool_create(mp_name,
> +//			nb_sess,
> +//			sess_sz,
> +//			CDEV_MP_CACHE_SZ,
> +//			0, NULL, NULL, NULL,
> +//			NULL, socket_id,
> +//			0);
> +//	ctx->session_priv_pool = sess_mp;
> +//
> +//	if (ctx->session_priv_pool == NULL)
> +//		rte_exit(EXIT_FAILURE,
> +//			"Cannot init session priv pool on socket %d\n",
> +//			socket_id);
> +//	else
> +//		printf("Allocated session priv pool on socket %d\n",
> +//			socket_id);
> +//}
> 
>  static void
>  pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
> @@ -2938,8 +2932,8 @@ main(int32_t argc, char **argv)
> 
>  		pool_init(&socket_ctx[socket_id], socket_id,
> nb_bufs_in_pool);
>  		session_pool_init(&socket_ctx[socket_id], socket_id,
> sess_sz);
> -		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> -			sess_sz);
> +//		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> +//			sess_sz);
>  	}
>  	printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool);
> 
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index 03d907cba8..a5921de11c 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -143,8 +143,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx,
> struct ipsec_sa *sa,
>  		ips->crypto.ses = rte_cryptodev_sym_session_create(
>  				ipsec_ctx->session_pool);
>  		rte_cryptodev_sym_session_init(ipsec_ctx-
> >tbl[cdev_id_qp].id,
> -				ips->crypto.ses, sa->xforms,
> -				ipsec_ctx->session_priv_pool);
> +				ips->crypto.ses, sa->xforms);
> 
>  		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
>  				&cdev_info);
> diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> index 8405c48171..673c64e8dc 100644
> --- a/examples/ipsec-secgw/ipsec.h
> +++ b/examples/ipsec-secgw/ipsec.h
> @@ -243,7 +243,6 @@ struct socket_ctx {
>  	struct rte_mempool *mbuf_pool;
>  	struct rte_mempool *mbuf_pool_indir;
>  	struct rte_mempool *session_pool;
> -	struct rte_mempool *session_priv_pool;
>  };
> 
>  struct cnt_blk {
> diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-
> secgw/ipsec_worker.c
> index c545497cee..04bcce49db 100644
> --- a/examples/ipsec-secgw/ipsec_worker.c
> +++ b/examples/ipsec-secgw/ipsec_worker.c
> @@ -537,14 +537,10 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct
> eh_event_link_info *links,
>  	lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
>  	lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in;
>  	lconf.inbound.session_pool = socket_ctx[socket_id].session_pool;
> -	lconf.inbound.session_priv_pool =
> -			socket_ctx[socket_id].session_priv_pool;
>  	lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
>  	lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
>  	lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out;
>  	lconf.outbound.session_pool = socket_ctx[socket_id].session_pool;
> -	lconf.outbound.session_priv_pool =
> -			socket_ctx[socket_id].session_priv_pool;
> 
>  	RTE_LOG(INFO, IPSEC,
>  		"Launching event mode worker (non-burst - Tx internal port -
> "
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 66d1491bf7..bdc3da731c 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -188,7 +188,7 @@ struct l2fwd_crypto_params {
>  	struct l2fwd_iv auth_iv;
>  	struct l2fwd_iv aead_iv;
>  	struct l2fwd_key aad;
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
> 
>  	uint8_t do_cipher;
>  	uint8_t do_hash;
> @@ -229,7 +229,6 @@ struct rte_mempool *l2fwd_pktmbuf_pool;
>  struct rte_mempool *l2fwd_crypto_op_pool;
>  static struct {
>  	struct rte_mempool *sess_mp;
> -	struct rte_mempool *priv_mp;
>  } session_pool_socket[RTE_MAX_NUMA_NODES];
> 
>  /* Per-port statistics struct */
> @@ -671,11 +670,11 @@ generate_random_key(uint8_t *key, unsigned
> length)
>  }
> 
>  /* Session is created and is later attached to the crypto operation. 8< */
> -static struct rte_cryptodev_sym_session *
> +static void *
>  initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t
> cdev_id)
>  {
>  	struct rte_crypto_sym_xform *first_xform;
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
>  	int retval = rte_cryptodev_socket_id(cdev_id);
> 
>  	if (retval < 0)
> @@ -703,8 +702,7 @@ initialize_crypto_session(struct l2fwd_crypto_options
> *options, uint8_t cdev_id)
>  		return NULL;
> 
>  	if (rte_cryptodev_sym_session_init(cdev_id, session,
> -				first_xform,
> -				session_pool_socket[socket_id].priv_mp) < 0)
> +				first_xform) < 0)
>  		return NULL;
> 
>  	return session;
> @@ -730,7 +728,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options
> *options)
>  			US_PER_S * BURST_TX_DRAIN_US;
>  	struct l2fwd_crypto_params *cparams;
>  	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
> 
>  	if (qconf->nb_rx_ports == 0) {
>  		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n",
> lcore_id);
> @@ -2388,30 +2386,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options
> *options, unsigned nb_ports,
>  		} else
>  			sessions_needed = enabled_cdev_count;
> 
> -		if (session_pool_socket[socket_id].priv_mp == NULL) {
> -			char mp_name[RTE_MEMPOOL_NAMESIZE];
> -
> -			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> -				"priv_sess_mp_%u", socket_id);
> -
> -			session_pool_socket[socket_id].priv_mp =
> -					rte_mempool_create(mp_name,
> -						sessions_needed,
> -						max_sess_sz,
> -						0, 0, NULL, NULL, NULL,
> -						NULL, socket_id,
> -						0);
> -
> -			if (session_pool_socket[socket_id].priv_mp == NULL)
> {
> -				printf("Cannot create pool on socket %d\n",
> -					socket_id);
> -				return -ENOMEM;
> -			}
> -
> -			printf("Allocated pool \"%s\" on socket %d\n",
> -				mp_name, socket_id);
> -		}
> -
>  		if (session_pool_socket[socket_id].sess_mp == NULL) {
>  			char mp_name[RTE_MEMPOOL_NAMESIZE];
>  			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> @@ -2421,7 +2395,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options
> *options, unsigned nb_ports,
> 
> 	rte_cryptodev_sym_session_pool_create(
>  							mp_name,
>  							sessions_needed,
> -							0, 0, 0, socket_id);
> +							max_sess_sz,
> +							0, 0, socket_id);
> 
>  			if (session_pool_socket[socket_id].sess_mp == NULL)
> {
>  				printf("Cannot create pool on socket %d\n",
> @@ -2573,8 +2548,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options
> *options, unsigned nb_ports,
> 
>  		qp_conf.nb_descriptors = 2048;
>  		qp_conf.mp_session =
> session_pool_socket[socket_id].sess_mp;
> -		qp_conf.mp_session_private =
> -				session_pool_socket[socket_id].priv_mp;
> 
>  		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0,
> &qp_conf,
>  				socket_id);
> diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
> index dea7dcbd07..cbb97aaf76 100644
> --- a/examples/vhost_crypto/main.c
> +++ b/examples/vhost_crypto/main.c
> @@ -46,7 +46,6 @@ struct vhost_crypto_info {
>  	int vids[MAX_NB_SOCKETS];
>  	uint32_t nb_vids;
>  	struct rte_mempool *sess_pool;
> -	struct rte_mempool *sess_priv_pool;
>  	struct rte_mempool *cop_pool;
>  	uint8_t cid;
>  	uint32_t qid;
> @@ -304,7 +303,6 @@ new_device(int vid)
>  	}
> 
>  	ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
> -			info->sess_priv_pool,
>  			rte_lcore_to_socket_id(options.los[i].lcore_id));
>  	if (ret) {
>  		RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
> @@ -458,7 +456,6 @@ free_resource(void)
> 
>  		rte_mempool_free(info->cop_pool);
>  		rte_mempool_free(info->sess_pool);
> -		rte_mempool_free(info->sess_priv_pool);
> 
>  		for (j = 0; j < lo->nb_sockets; j++) {
>  			rte_vhost_driver_unregister(lo->socket_files[i]);
> @@ -544,16 +541,12 @@ main(int argc, char *argv[])
> 
>  		snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
>  		info->sess_pool =
> rte_cryptodev_sym_session_pool_create(name,
> -				SESSION_MAP_ENTRIES, 0, 0, 0,
> -				rte_lcore_to_socket_id(lo->lcore_id));
> -
> -		snprintf(name, 127, "SESS_POOL_PRIV_%u", lo->lcore_id);
> -		info->sess_priv_pool = rte_mempool_create(name,
>  				SESSION_MAP_ENTRIES,
> 
> 	rte_cryptodev_sym_get_private_session_size(
> -				info->cid), 64, 0, NULL, NULL, NULL, NULL,
> -				rte_lcore_to_socket_id(lo->lcore_id), 0);
> -		if (!info->sess_priv_pool || !info->sess_pool) {
> +					info->cid), 0, 0,
> +				rte_lcore_to_socket_id(lo->lcore_id));
> +
> +		if (!info->sess_pool) {
>  			RTE_LOG(ERR, USER1, "Failed to create mempool");
>  			goto error_exit;
>  		}
> @@ -574,7 +567,6 @@ main(int argc, char *argv[])
> 
>  		qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
>  		qp_conf.mp_session = info->sess_pool;
> -		qp_conf.mp_session_private = info->sess_priv_pool;
> 
>  		for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
>  			ret = rte_cryptodev_queue_pair_setup(info->cid, j,
> diff --git a/lib/cryptodev/cryptodev_pmd.h
> b/lib/cryptodev/cryptodev_pmd.h
> index ae3aac59ae..d758b3b85d 100644
> --- a/lib/cryptodev/cryptodev_pmd.h
> +++ b/lib/cryptodev/cryptodev_pmd.h
> @@ -242,7 +242,6 @@ typedef unsigned int
> (*cryptodev_asym_get_session_private_size_t)(
>   * @param	dev		Crypto device pointer
>   * @param	xform		Single or chain of crypto xforms
>   * @param	session		Pointer to cryptodev's private session
> structure
> - * @param	mp		Mempool where the private session is
> allocated
>   *
>   * @return
>   *  - Returns 0 if private session structure have been created successfully.
> @@ -251,9 +250,7 @@ typedef unsigned int
> (*cryptodev_asym_get_session_private_size_t)(
>   *  - Returns -ENOMEM if the private session could not be allocated.
>   */
>  typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev
> *dev,
> -		struct rte_crypto_sym_xform *xform,
> -		struct rte_cryptodev_sym_session *session,
> -		struct rte_mempool *mp);
> +		struct rte_crypto_sym_xform *xform, void *session);
>  /**
>   * Configure a Crypto asymmetric session on a device.
>   *
> @@ -279,7 +276,7 @@ typedef int
> (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
>   * @param	sess		Cryptodev session structure
>   */
>  typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
> -		struct rte_cryptodev_sym_session *sess);
> +		void *sess);
>  /**
>   * Free asymmetric session private data.
>   *
> diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
> index a864f5036f..200617f623 100644
> --- a/lib/cryptodev/rte_crypto.h
> +++ b/lib/cryptodev/rte_crypto.h
> @@ -420,7 +420,7 @@ rte_crypto_op_sym_xforms_alloc(struct
> rte_crypto_op *op, uint8_t nb_xforms)
>   */
>  static inline int
>  rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
> -		struct rte_cryptodev_sym_session *sess)
> +		void *sess)
>  {
>  	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
>  		return -1;
> diff --git a/lib/cryptodev/rte_crypto_sym.h
> b/lib/cryptodev/rte_crypto_sym.h
> index 58c0724743..848da1942c 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -932,7 +932,7 @@ __rte_crypto_sym_op_sym_xforms_alloc(struct
> rte_crypto_sym_op *sym_op,
>   */
>  static inline int
>  __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op
> *sym_op,
> -		struct rte_cryptodev_sym_session *sess)
> +		void *sess)
>  {
>  	sym_op->session = sess;
> 
> diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> index 9fa3aff1d3..a31cae202a 100644
> --- a/lib/cryptodev/rte_cryptodev.c
> +++ b/lib/cryptodev/rte_cryptodev.c
> @@ -199,6 +199,8 @@ struct
> rte_cryptodev_sym_session_pool_private_data {
>  	/**< number of elements in sess_data array */
>  	uint16_t user_data_sz;
>  	/**< session user data will be placed after sess_data */
> +	uint16_t sess_priv_sz;
> +	/**< Maximum private session data size which each driver can use */
>  };
> 
>  int
> @@ -1223,8 +1225,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id,
> uint16_t queue_pair_id,
>  		return -EINVAL;
>  	}
> 
> -	if ((qp_conf->mp_session && !qp_conf->mp_session_private) ||
> -			(!qp_conf->mp_session && qp_conf-
> >mp_session_private)) {
> +	if (!qp_conf->mp_session) {
>  		CDEV_LOG_ERR("Invalid mempools\n");
>  		return -EINVAL;
>  	}
> @@ -1232,7 +1233,6 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id,
> uint16_t queue_pair_id,
>  	if (qp_conf->mp_session) {
>  		struct rte_cryptodev_sym_session_pool_private_data
> *pool_priv;
>  		uint32_t obj_size = qp_conf->mp_session->elt_size;
> -		uint32_t obj_priv_size = qp_conf->mp_session_private-
> >elt_size;
>  		struct rte_cryptodev_sym_session s = {0};
> 
>  		pool_priv = rte_mempool_get_priv(qp_conf->mp_session);
> @@ -1244,11 +1244,11 @@ rte_cryptodev_queue_pair_setup(uint8_t
> dev_id, uint16_t queue_pair_id,
> 
>  		s.nb_drivers = pool_priv->nb_drivers;
>  		s.user_data_sz = pool_priv->user_data_sz;
> +		s.priv_sz = pool_priv->sess_priv_sz;
> 
> -		if
> ((rte_cryptodev_sym_get_existing_header_session_size(&s) >
> -			obj_size) || (s.nb_drivers <= dev->driver_id) ||
> -
> 	rte_cryptodev_sym_get_private_session_size(dev_id) >
> -				obj_priv_size) {
> +		if
> (((rte_cryptodev_sym_get_existing_header_session_size(&s) +
> +				(s.nb_drivers * s.priv_sz)) > obj_size) ||
> +				(s.nb_drivers <= dev->driver_id)) {
>  			CDEV_LOG_ERR("Invalid mempool\n");
>  			return -EINVAL;
>  		}
> @@ -1710,11 +1710,11 @@ rte_cryptodev_pmd_callback_process(struct
> rte_cryptodev *dev,
> 
>  int
>  rte_cryptodev_sym_session_init(uint8_t dev_id,
> -		struct rte_cryptodev_sym_session *sess,
> -		struct rte_crypto_sym_xform *xforms,
> -		struct rte_mempool *mp)
> +		void *sess_opaque,
> +		struct rte_crypto_sym_xform *xforms)
>  {
>  	struct rte_cryptodev *dev;
> +	struct rte_cryptodev_sym_session *sess = sess_opaque;
>  	uint32_t sess_priv_sz =
> rte_cryptodev_sym_get_private_session_size(
>  			dev_id);
>  	uint8_t index;
> @@ -1727,10 +1727,10 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> 
>  	dev = rte_cryptodev_pmd_get_dev(dev_id);
> 
> -	if (sess == NULL || xforms == NULL || dev == NULL || mp == NULL)
> +	if (sess == NULL || xforms == NULL || dev == NULL)
>  		return -EINVAL;
> 
> -	if (mp->elt_size < sess_priv_sz)
> +	if (sess->priv_sz < sess_priv_sz)
>  		return -EINVAL;
> 
>  	index = dev->driver_id;
> @@ -1740,8 +1740,11 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
>  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >sym_session_configure, -ENOTSUP);
> 
>  	if (sess->sess_data[index].refcnt == 0) {
> +		sess->sess_data[index].data = (void *)((uint8_t *)sess +
> +
> 	rte_cryptodev_sym_get_header_session_size() +
> +				(index * sess->priv_sz));
>  		ret = dev->dev_ops->sym_session_configure(dev, xforms,
> -							sess, mp);
> +				sess->sess_data[index].data);
>  		if (ret < 0) {
>  			CDEV_LOG_ERR(
>  				"dev_id %d failed to configure session
> details",
> @@ -1750,7 +1753,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
>  		}
>  	}
> 
> -	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms, mp);
> +	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms);
>  	sess->sess_data[index].refcnt++;
>  	return 0;
>  }
> @@ -1795,6 +1798,21 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
>  	rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
>  	return 0;
>  }
> +static size_t
> +get_max_sym_sess_priv_sz(void)
> +{
> +	size_t max_sz, sz;
> +	int16_t cdev_id, n;
> +
> +	max_sz = 0;
> +	n = rte_cryptodev_count();
> +	for (cdev_id = 0; cdev_id != n; cdev_id++) {
> +		sz = rte_cryptodev_sym_get_private_session_size(cdev_id);
> +		if (sz > max_sz)
> +			max_sz = sz;
> +	}
> +	return max_sz;
> +}
> 
>  struct rte_mempool *
>  rte_cryptodev_sym_session_pool_create(const char *name, uint32_t
> nb_elts,
> @@ -1804,15 +1822,15 @@ rte_cryptodev_sym_session_pool_create(const
> char *name, uint32_t nb_elts,
>  	struct rte_mempool *mp;
>  	struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
>  	uint32_t obj_sz;
> +	uint32_t sess_priv_sz = get_max_sym_sess_priv_sz();
> 
>  	obj_sz = rte_cryptodev_sym_get_header_session_size() +
> user_data_size;
> -	if (obj_sz > elt_size)
> +	if (elt_size < obj_sz + (sess_priv_sz * nb_drivers)) {
>  		CDEV_LOG_INFO("elt_size %u is expanded to %u\n",
> elt_size,
> -				obj_sz);
> -	else
> -		obj_sz = elt_size;
> -
> -	mp = rte_mempool_create(name, nb_elts, obj_sz, cache_size,
> +				obj_sz + (sess_priv_sz * nb_drivers));
> +		elt_size = obj_sz + (sess_priv_sz * nb_drivers);
> +	}
> +	mp = rte_mempool_create(name, nb_elts, elt_size, cache_size,
>  			(uint32_t)(sizeof(*pool_priv)),
>  			NULL, NULL, NULL, NULL,
>  			socket_id, 0);
> @@ -1832,6 +1850,7 @@ rte_cryptodev_sym_session_pool_create(const
> char *name, uint32_t nb_elts,
> 
>  	pool_priv->nb_drivers = nb_drivers;
>  	pool_priv->user_data_sz = user_data_size;
> +	pool_priv->sess_priv_sz = sess_priv_sz;
> 
>  	rte_cryptodev_trace_sym_session_pool_create(name, nb_elts,
>  		elt_size, cache_size, user_data_size, mp);
> @@ -1865,7 +1884,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct
> rte_mempool *mp)
>  	return 1;
>  }
> 
> -struct rte_cryptodev_sym_session *
> +void *
>  rte_cryptodev_sym_session_create(struct rte_mempool *mp)
>  {
>  	struct rte_cryptodev_sym_session *sess;
> @@ -1886,6 +1905,7 @@ rte_cryptodev_sym_session_create(struct
> rte_mempool *mp)
> 
>  	sess->nb_drivers = pool_priv->nb_drivers;
>  	sess->user_data_sz = pool_priv->user_data_sz;
> +	sess->priv_sz = pool_priv->sess_priv_sz;
>  	sess->opaque_data = 0;
> 
>  	/* Clear device session pointer.
> @@ -1895,7 +1915,7 @@ rte_cryptodev_sym_session_create(struct
> rte_mempool *mp)
>  			rte_cryptodev_sym_session_data_size(sess));
> 
>  	rte_cryptodev_trace_sym_session_create(mp, sess);
> -	return sess;
> +	return (void *)sess;
>  }
> 
>  struct rte_cryptodev_asym_session *
> @@ -1933,9 +1953,9 @@ rte_cryptodev_asym_session_create(struct
> rte_mempool *mp)
>  }
> 
>  int
> -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> -		struct rte_cryptodev_sym_session *sess)
> +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *s)
>  {
> +	struct rte_cryptodev_sym_session *sess = s;
>  	struct rte_cryptodev *dev;
>  	uint8_t driver_id;
> 
> @@ -1957,7 +1977,7 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
> 
>  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_clear,
> -ENOTSUP);
> 
> -	dev->dev_ops->sym_session_clear(dev, sess);
> +	dev->dev_ops->sym_session_clear(dev, sess-
> >sess_data[driver_id].data);
> 
>  	rte_cryptodev_trace_sym_session_clear(dev_id, sess);
>  	return 0;
> @@ -1988,10 +2008,11 @@ rte_cryptodev_asym_session_clear(uint8_t
> dev_id,
>  }
> 
>  int
> -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
> +rte_cryptodev_sym_session_free(void *s)
>  {
>  	uint8_t i;
>  	struct rte_mempool *sess_mp;
> +	struct rte_cryptodev_sym_session *sess = s;
> 
>  	if (sess == NULL)
>  		return -EINVAL;
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index bb01f0f195..25af6fa7b9 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -537,8 +537,6 @@ struct rte_cryptodev_qp_conf {
>  	uint32_t nb_descriptors; /**< Number of descriptors per queue pair
> */
>  	struct rte_mempool *mp_session;
>  	/**< The mempool for creating session in sessionless mode */
> -	struct rte_mempool *mp_session_private;
> -	/**< The mempool for creating sess private data in sessionless mode
> */
>  };
> 
>  /**
> @@ -1126,6 +1124,8 @@ struct rte_cryptodev_sym_session {
>  	/**< number of elements in sess_data array */
>  	uint16_t user_data_sz;
>  	/**< session user data will be placed after sess_data */
> +	uint16_t priv_sz;
> +	/**< Maximum private session data size which each driver can use */
>  	__extension__ struct {
>  		void *data;
>  		uint16_t refcnt;
> @@ -1177,10 +1177,10 @@ rte_cryptodev_sym_session_pool_create(const
> char *name, uint32_t nb_elts,
>   * @param   mempool    Symmetric session mempool to allocate session
>   *                     objects from
>   * @return
> - *  - On success return pointer to sym-session
> + *  - On success return opaque pointer to sym-session
>   *  - On failure returns NULL
>   */
> -struct rte_cryptodev_sym_session *
> +void *
>  rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
> 
>  /**
> @@ -1209,7 +1209,7 @@ rte_cryptodev_asym_session_create(struct
> rte_mempool *mempool);
>   *  - -EBUSY if not all device private data has been freed.
>   */
>  int
> -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session
> *sess);
> +rte_cryptodev_sym_session_free(void *sess);
> 
>  /**
>   * Frees asymmetric crypto session header, after checking that all
> @@ -1229,25 +1229,23 @@ rte_cryptodev_asym_session_free(struct
> rte_cryptodev_asym_session *sess);
> 
>  /**
>   * Fill out private data for the device id, based on its device type.
> + * Memory for private data is already allocated in sess, the driver needs
> + * to fill in the content.
>   *
>   * @param   dev_id   ID of device that we want the session to be used on
>   * @param   sess     Session where the private data will be attached to
>   * @param   xforms   Symmetric crypto transform operations to apply on
> flow
>   *                   processed with this session
> - * @param   mempool  Mempool where the private data is allocated.
>   *
>   * @return
>   *  - On success, zero.
>   *  - -EINVAL if input parameters are invalid.
>   *  - -ENOTSUP if crypto device does not support the crypto transform or
>   *    does not support symmetric operations.
> - *  - -ENOMEM if the private session could not be allocated.
>   */
>  int
> -rte_cryptodev_sym_session_init(uint8_t dev_id,
> -			struct rte_cryptodev_sym_session *sess,
> -			struct rte_crypto_sym_xform *xforms,
> -			struct rte_mempool *mempool);
> +rte_cryptodev_sym_session_init(uint8_t dev_id, void *sess,
> +			struct rte_crypto_sym_xform *xforms);
> 
>  /**
>   * Initialize asymmetric session on a device with specific asymmetric xform
> @@ -1286,8 +1284,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
>   *  - -ENOTSUP if crypto device does not support symmetric operations.
>   */
>  int
> -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> -			struct rte_cryptodev_sym_session *sess);
> +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *sess);
> 
>  /**
>   * Frees resources held by asymmetric session during
> rte_cryptodev_session_init
> diff --git a/lib/cryptodev/rte_cryptodev_trace.h
> b/lib/cryptodev/rte_cryptodev_trace.h
> index d1f4f069a3..44da04c425 100644
> --- a/lib/cryptodev/rte_cryptodev_trace.h
> +++ b/lib/cryptodev/rte_cryptodev_trace.h
> @@ -56,7 +56,6 @@ RTE_TRACE_POINT(
>  	rte_trace_point_emit_u16(queue_pair_id);
>  	rte_trace_point_emit_u32(conf->nb_descriptors);
>  	rte_trace_point_emit_ptr(conf->mp_session);
> -	rte_trace_point_emit_ptr(conf->mp_session_private);
>  )
> 
>  RTE_TRACE_POINT(
> @@ -106,15 +105,13 @@ RTE_TRACE_POINT(
>  RTE_TRACE_POINT(
>  	rte_cryptodev_trace_sym_session_init,
>  	RTE_TRACE_POINT_ARGS(uint8_t dev_id,
> -		struct rte_cryptodev_sym_session *sess, void *xforms,
> -		void *mempool),
> +		struct rte_cryptodev_sym_session *sess, void *xforms),
>  	rte_trace_point_emit_u8(dev_id);
>  	rte_trace_point_emit_ptr(sess);
>  	rte_trace_point_emit_u64(sess->opaque_data);
>  	rte_trace_point_emit_u16(sess->nb_drivers);
>  	rte_trace_point_emit_u16(sess->user_data_sz);
>  	rte_trace_point_emit_ptr(xforms);
> -	rte_trace_point_emit_ptr(mempool);
>  )
> 
>  RTE_TRACE_POINT(
> diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
> index ad7904c0ee..efdba9c899 100644
> --- a/lib/pipeline/rte_table_action.c
> +++ b/lib/pipeline/rte_table_action.c
> @@ -1719,7 +1719,7 @@ struct sym_crypto_data {
>  	uint16_t op_mask;
> 
>  	/** Session pointer. */
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
> 
>  	/** Direction of crypto, encrypt or decrypt */
>  	uint16_t direction;
> @@ -1780,7 +1780,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
>  	const struct rte_crypto_auth_xform *auth_xform = NULL;
>  	const struct rte_crypto_aead_xform *aead_xform = NULL;
>  	struct rte_crypto_sym_xform *xform = p->xform;
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
>  	int ret;
> 
>  	memset(data, 0, sizeof(*data));
> @@ -1905,7 +1905,7 @@ sym_crypto_apply(struct sym_crypto_data *data,
>  		return -ENOMEM;
> 
>  	ret = rte_cryptodev_sym_session_init(cfg->cryptodev_id, session,
> -			p->xform, cfg->mp_init);
> +			p->xform);
>  	if (ret < 0) {
>  		rte_cryptodev_sym_session_free(session);
>  		return ret;
> @@ -2858,7 +2858,7 @@ rte_table_action_time_read(struct
> rte_table_action *action,
>  	return 0;
>  }
> 
> -struct rte_cryptodev_sym_session *
> +void *
>  rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
>  	void *data)
>  {
> diff --git a/lib/pipeline/rte_table_action.h b/lib/pipeline/rte_table_action.h
> index 82bc9d9ac9..68db453a8b 100644
> --- a/lib/pipeline/rte_table_action.h
> +++ b/lib/pipeline/rte_table_action.h
> @@ -1129,7 +1129,7 @@ rte_table_action_time_read(struct
> rte_table_action *action,
>   *   The pointer to the session on success, NULL otherwise.
>   */
>  __rte_experimental
> -struct rte_cryptodev_sym_session *
> +void *
>  rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
>  	void *data);
> 
> diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
> index f54d731139..d9b7beed9c 100644
> --- a/lib/vhost/rte_vhost_crypto.h
> +++ b/lib/vhost/rte_vhost_crypto.h
> @@ -50,8 +50,6 @@ rte_vhost_crypto_driver_start(const char *path);
>   *  multiple Vhost-crypto devices.
>   * @param sess_pool
>   *  The pointer to the created cryptodev session pool.
> - * @param sess_priv_pool
> - *  The pointer to the created cryptodev session private data mempool.
>   * @param socket_id
>   *  NUMA Socket ID to allocate resources on. *
>   * @return
> @@ -61,7 +59,6 @@ rte_vhost_crypto_driver_start(const char *path);
>  int
>  rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
>  		struct rte_mempool *sess_pool,
> -		struct rte_mempool *sess_priv_pool,
>  		int socket_id);
> 
>  /**
> diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
> index 926b5c0bd9..b4464c4253 100644
> --- a/lib/vhost/vhost_crypto.c
> +++ b/lib/vhost/vhost_crypto.c
> @@ -338,7 +338,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> *vcrypto,
>  		VhostUserCryptoSessionParam *sess_param)
>  {
>  	struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
> -	struct rte_cryptodev_sym_session *session;
> +	void *session;
>  	int ret;
> 
>  	switch (sess_param->op_type) {
> @@ -383,8 +383,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> *vcrypto,
>  		return;
>  	}
> 
> -	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1,
> -			vcrypto->sess_priv_pool) < 0) {
> +	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1)
> < 0) {
>  		VC_LOG_ERR("Failed to initialize session");
>  		sess_param->session_id = -VIRTIO_CRYPTO_ERR;
>  		return;
> @@ -1425,7 +1424,6 @@ rte_vhost_crypto_driver_start(const char *path)
>  int
>  rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
>  		struct rte_mempool *sess_pool,
> -		struct rte_mempool *sess_priv_pool,
>  		int socket_id)
>  {
>  	struct virtio_net *dev = get_device(vid);
> @@ -1447,7 +1445,6 @@ rte_vhost_crypto_create(int vid, uint8_t
> cryptodev_id,
>  	}
> 
>  	vcrypto->sess_pool = sess_pool;
> -	vcrypto->sess_priv_pool = sess_priv_pool;
>  	vcrypto->cid = cryptodev_id;
>  	vcrypto->cache_session_id = UINT64_MAX;
>  	vcrypto->last_session_id = 1;
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v12 06/12] pdump: support pcapng and filtering
  @ 2021-10-01 16:26  1%   ` Stephen Hemminger
  2021-10-01 16:27  1%   ` [dpdk-dev] [PATCH v12 11/12] doc: changes for new pcapng and dumpcap Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-01 16:26 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This enhances the DPDK pdump library to support the new
pcapng format and filtering via BPF.

The internal client/server protocol is changed to support
two versions: the original pdump basic version and a
new pcapng version.

The internal version number (not part of exposed API or ABI)
is intentionally increased to cause any attempt to try
mismatched primary/secondary process to fail.

Add a new API to allow filtering of captured packets with a
DPDK BPF (eBPF) filter program. It keeps statistics on packets
captured, filtered, and missed (because the ring was full).

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/meson.build       |   4 +-
 lib/pdump/meson.build |   2 +-
 lib/pdump/rte_pdump.c | 427 ++++++++++++++++++++++++++++++------------
 lib/pdump/rte_pdump.h | 113 ++++++++++-
 lib/pdump/version.map |   8 +
 5 files changed, 427 insertions(+), 127 deletions(-)

diff --git a/lib/meson.build b/lib/meson.build
index fe53024e2dcd..a0bafa08b559 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -27,6 +27,7 @@ libraries = [
         'acl',
         'bbdev',
         'bitratestats',
+        'bpf',
         'cfgfile',
         'compressdev',
         'cryptodev',
@@ -43,7 +44,6 @@ libraries = [
         'member',
         'pcapng',
         'power',
-        'pdump',
         'rawdev',
         'regexdev',
         'rib',
@@ -55,10 +55,10 @@ libraries = [
         'ipsec', # ipsec lib depends on net, crypto and security
         'fib', #fib lib depends on rib
         'port', # pkt framework libs which use other libs from above
+        'pdump', # pdump lib depends on bpf
         'table',
         'pipeline',
         'flow_classify', # flow_classify lib depends on pkt framework table lib
-        'bpf',
         'graph',
         'node',
 ]
diff --git a/lib/pdump/meson.build b/lib/pdump/meson.build
index 3a95eabde6a6..51ceb2afdec5 100644
--- a/lib/pdump/meson.build
+++ b/lib/pdump/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('rte_pdump.c')
 headers = files('rte_pdump.h')
-deps += ['ethdev']
+deps += ['ethdev', 'bpf', 'pcapng']
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc1564..82b4f622ca37 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -7,8 +7,10 @@
 #include <rte_ethdev.h>
 #include <rte_lcore.h>
 #include <rte_log.h>
+#include <rte_memzone.h>
 #include <rte_errno.h>
 #include <rte_string_fns.h>
+#include <rte_pcapng.h>
 
 #include "rte_pdump.h"
 
@@ -27,30 +29,23 @@ enum pdump_operation {
 	ENABLE = 2
 };
 
+/* Internal version number in request */
 enum pdump_version {
-	V1 = 1
+	V1 = 1,		    /* no filtering or snap */
+	V2 = 2,
 };
 
 struct pdump_request {
 	uint16_t ver;
 	uint16_t op;
 	uint32_t flags;
-	union pdump_data {
-		struct enable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} en_v1;
-		struct disable_v1 {
-			char device[RTE_DEV_NAME_MAX_LEN];
-			uint16_t queue;
-			struct rte_ring *ring;
-			struct rte_mempool *mp;
-			void *filter;
-		} dis_v1;
-	} data;
+	char device[RTE_DEV_NAME_MAX_LEN];
+	uint16_t queue;
+	struct rte_ring *ring;
+	struct rte_mempool *mp;
+
+	const struct rte_bpf_prm *prm;
+	uint32_t snaplen;
 };
 
 struct pdump_response {
@@ -63,80 +58,136 @@ static struct pdump_rxtx_cbs {
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	const struct rte_eth_rxtx_callback *cb;
-	void *filter;
+	const struct rte_bpf *filter;
+	enum pdump_version ver;
+	uint32_t snaplen;
 } rx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
 tx_cbs[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
 
-
-static inline void
-pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
+static const char *MZ_RTE_PDUMP_STATS = "rte_pdump_stats";
+
+/* Shared memory between primary and secondary processes. */
+static struct {
+	struct rte_pdump_stats rx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+	struct rte_pdump_stats tx[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
+} *pdump_stats;
+
+/* Create a clone of mbuf to be placed into ring. */
+static void
+pdump_copy(uint16_t port_id, uint16_t queue,
+	   enum rte_pcapng_direction direction,
+	   struct rte_mbuf **pkts, uint16_t nb_pkts,
+	   const struct pdump_rxtx_cbs *cbs,
+	   struct rte_pdump_stats *stats)
 {
 	unsigned int i;
 	int ring_enq;
 	uint16_t d_pkts = 0;
 	struct rte_mbuf *dup_bufs[nb_pkts];
-	struct pdump_rxtx_cbs *cbs;
+	uint64_t ts;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 	struct rte_mbuf *p;
+	uint64_t rcs[nb_pkts];
+
+	if (cbs->filter)
+		rte_bpf_exec_burst(cbs->filter, (void **)pkts, rcs, nb_pkts);
 
-	cbs  = user_params;
+	ts = rte_get_tsc_cycles();
 	ring = cbs->ring;
 	mp = cbs->mp;
 	for (i = 0; i < nb_pkts; i++) {
-		p = rte_pktmbuf_copy(pkts[i], mp, 0, UINT32_MAX);
-		if (p)
+		/*
+		 * This uses same BPF return value convention as socket filter
+		 * and pcap_offline_filter.
+		 * if program returns zero
+		 * then packet doesn't match the filter (will be ignored).
+		 */
+		if (cbs->filter && rcs[i] == 0) {
+			__atomic_fetch_add(&stats->filtered,
+					   1, __ATOMIC_RELAXED);
+			continue;
+		}
+
+		/*
+		 * If using pcapng then want to wrap packets
+		 * otherwise a simple copy.
+		 */
+		if (cbs->ver == V2)
+			p = rte_pcapng_copy(port_id, queue,
+					    pkts[i], mp, cbs->snaplen,
+					    ts, direction);
+		else
+			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
+
+		if (unlikely(p == NULL))
+			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+		else
 			dup_bufs[d_pkts++] = p;
 	}
 
+	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)dup_bufs, d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		PDUMP_LOG(DEBUG,
-			"only %d of packets enqueued to ring\n", ring_enq);
+		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
 
 static uint16_t
-pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_rx(uint16_t port, uint16_t queue,
 	struct rte_mbuf **pkts, uint16_t nb_pkts,
-	uint16_t max_pkts __rte_unused,
-	void *user_params)
+	uint16_t max_pkts __rte_unused, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->rx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_IN,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static uint16_t
-pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused,
+pdump_tx(uint16_t port, uint16_t queue,
 		struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params)
 {
-	pdump_copy(pkts, nb_pkts, user_params);
+	const struct pdump_rxtx_cbs *cbs = user_params;
+	struct rte_pdump_stats *stats = &pdump_stats->tx[port][queue];
+
+	pdump_copy(port, queue, RTE_PCAPNG_DIRECTION_OUT,
+		   pkts, nb_pkts, cbs, stats);
 	return nb_pkts;
 }
 
 static int
-pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_rx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &rx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &rx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"rx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_first_rx_callback(port, qid,
 								pdump_rx, cbs);
 			if (cbs->cb == NULL) {
@@ -145,8 +196,7 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -170,26 +220,32 @@ pdump_register_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 }
 
 static int
-pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
-				struct rte_ring *ring, struct rte_mempool *mp,
-				uint16_t operation)
+pdump_register_tx_callbacks(enum pdump_version ver,
+			    uint16_t end_q, uint16_t port, uint16_t queue,
+			    struct rte_ring *ring, struct rte_mempool *mp,
+			    struct rte_bpf *filter,
+			    uint16_t operation, uint32_t snaplen)
 {
 
 	uint16_t qid;
-	struct pdump_rxtx_cbs *cbs = NULL;
 
 	qid = (queue == RTE_PDUMP_ALL_QUEUES) ? 0 : queue;
 	for (; qid < end_q; qid++) {
-		cbs = &tx_cbs[port][qid];
-		if (cbs && operation == ENABLE) {
+		struct pdump_rxtx_cbs *cbs = &tx_cbs[port][qid];
+
+		if (operation == ENABLE) {
 			if (cbs->cb) {
 				PDUMP_LOG(ERR,
 					"tx callback for port=%d queue=%d, already exists\n",
 					port, qid);
 				return -EEXIST;
 			}
+			cbs->ver = ver;
 			cbs->ring = ring;
 			cbs->mp = mp;
+			cbs->snaplen = snaplen;
+			cbs->filter = filter;
+
 			cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx,
 								cbs);
 			if (cbs->cb == NULL) {
@@ -198,8 +254,7 @@ pdump_register_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue,
 					rte_errno);
 				return rte_errno;
 			}
-		}
-		if (cbs && operation == DISABLE) {
+		} else if (operation == DISABLE) {
 			int ret;
 
 			if (cbs->cb == NULL) {
@@ -228,37 +283,47 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue;
 	uint16_t port;
 	int ret = 0;
+	struct rte_bpf *filter = NULL;
 	uint32_t flags;
 	uint16_t operation;
 	struct rte_ring *ring;
 	struct rte_mempool *mp;
 
-	flags = p->flags;
-	operation = p->op;
-	if (operation == ENABLE) {
-		ret = rte_eth_dev_get_port_by_name(p->data.en_v1.device,
-				&port);
-		if (ret < 0) {
+	/* Check for possible DPDK version mismatch */
+	if (!(p->ver == V1 || p->ver == V2)) {
+		PDUMP_LOG(ERR,
+			  "incorrect client version %u\n", p->ver);
+		return -EINVAL;
+	}
+
+	if (p->prm) {
+		if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) {
 			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.en_v1.device);
+				  "invalid BPF program type: %u\n",
+				  p->prm->prog_arg.type);
 			return -EINVAL;
 		}
-		queue = p->data.en_v1.queue;
-		ring = p->data.en_v1.ring;
-		mp = p->data.en_v1.mp;
-	} else {
-		ret = rte_eth_dev_get_port_by_name(p->data.dis_v1.device,
-				&port);
-		if (ret < 0) {
-			PDUMP_LOG(ERR,
-				"failed to get port id for device id=%s\n",
-				p->data.dis_v1.device);
-			return -EINVAL;
+
+		filter = rte_bpf_load(p->prm);
+		if (filter == NULL) {
+			PDUMP_LOG(ERR, "cannot load BPF filter: %s\n",
+				  rte_strerror(rte_errno));
+			return -rte_errno;
 		}
-		queue = p->data.dis_v1.queue;
-		ring = p->data.dis_v1.ring;
-		mp = p->data.dis_v1.mp;
+	}
+
+	flags = p->flags;
+	operation = p->op;
+	queue = p->queue;
+	ring = p->ring;
+	mp = p->mp;
+
+	ret = rte_eth_dev_get_port_by_name(p->device, &port);
+	if (ret < 0) {
+		PDUMP_LOG(ERR,
+			  "failed to get port id for device id=%s\n",
+			  p->device);
+		return -EINVAL;
 	}
 
 	/* validation if packet capture is for all queues */
@@ -296,8 +361,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register RX callback */
 	if (flags & RTE_PDUMP_FLAG_RX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_rx_q : queue + 1;
-		ret = pdump_register_rx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_rx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -305,8 +371,9 @@ set_pdump_rxtx_cbs(const struct pdump_request *p)
 	/* register TX callback */
 	if (flags & RTE_PDUMP_FLAG_TX) {
 		end_q = (queue == RTE_PDUMP_ALL_QUEUES) ? nb_tx_q : queue + 1;
-		ret = pdump_register_tx_callbacks(end_q, port, queue, ring, mp,
-							operation);
+		ret = pdump_register_tx_callbacks(p->ver, end_q, port, queue,
+						  ring, mp, filter,
+						  operation, p->snaplen);
 		if (ret < 0)
 			return ret;
 	}
@@ -332,7 +399,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 		resp->err_value = set_pdump_rxtx_cbs(cli_req);
 	}
 
-	strlcpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_resp.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_resp.len_param = sizeof(*resp);
 	mp_resp.num_fds = 0;
 	if (rte_mp_reply(&mp_resp, peer) < 0) {
@@ -347,8 +414,18 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer)
 int
 rte_pdump_init(void)
 {
+	const struct rte_memzone *mz;
 	int ret;
 
+	mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		PDUMP_LOG(ERR, "cannot allocate pdump statistics\n");
+		rte_errno = ENOMEM;
+		return -1;
+	}
+	pdump_stats = mz->addr;
+
 	ret = rte_mp_action_register(PDUMP_MP, pdump_server);
 	if (ret && rte_errno != ENOTSUP)
 		return -1;
@@ -392,14 +469,21 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 static int
 pdump_validate_flags(uint32_t flags)
 {
-	if (flags != RTE_PDUMP_FLAG_RX && flags != RTE_PDUMP_FLAG_TX &&
-		flags != RTE_PDUMP_FLAG_RXTX) {
+	if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) {
 		PDUMP_LOG(ERR,
 			"invalid flags, should be either rx/tx/rxtx\n");
 		rte_errno = EINVAL;
 		return -1;
 	}
 
+	/* mask off the flags we know about */
+	if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) {
+		PDUMP_LOG(ERR,
+			  "unknown flags: %#x\n", flags);
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	return 0;
 }
 
@@ -426,12 +510,12 @@ pdump_validate_port(uint16_t port, char *name)
 }
 
 static int
-pdump_prepare_client_request(char *device, uint16_t queue,
-				uint32_t flags,
-				uint16_t operation,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+pdump_prepare_client_request(const char *device, uint16_t queue,
+			     uint32_t flags, uint32_t snaplen,
+			     uint16_t operation,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     const struct rte_bpf_prm *prm)
 {
 	int ret = -1;
 	struct rte_mp_msg mp_req, *mp_rep;
@@ -440,26 +524,22 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	struct pdump_request *req = (struct pdump_request *)mp_req.param;
 	struct pdump_response *resp;
 
-	req->ver = 1;
-	req->flags = flags;
+	memset(req, 0, sizeof(*req));
+
+	req->ver = (flags & RTE_PDUMP_FLAG_PCAPNG) ? V2 : V1;
+	req->flags = flags & RTE_PDUMP_FLAG_RXTX;
 	req->op = operation;
+	req->queue = queue;
+	rte_strscpy(req->device, device, sizeof(req->device));
+
 	if ((operation & ENABLE) != 0) {
-		strlcpy(req->data.en_v1.device, device,
-			sizeof(req->data.en_v1.device));
-		req->data.en_v1.queue = queue;
-		req->data.en_v1.ring = ring;
-		req->data.en_v1.mp = mp;
-		req->data.en_v1.filter = filter;
-	} else {
-		strlcpy(req->data.dis_v1.device, device,
-			sizeof(req->data.dis_v1.device));
-		req->data.dis_v1.queue = queue;
-		req->data.dis_v1.ring = NULL;
-		req->data.dis_v1.mp = NULL;
-		req->data.dis_v1.filter = NULL;
+		req->ring = ring;
+		req->mp = mp;
+		req->prm = prm;
+		req->snaplen = snaplen;
 	}
 
-	strlcpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
+	rte_strscpy(mp_req.name, PDUMP_MP, RTE_MP_MAX_NAME_LEN);
 	mp_req.len_param = sizeof(*req);
 	mp_req.num_fds = 0;
 	if (rte_mp_request_sync(&mp_req, &mp_reply, &ts) == 0) {
@@ -477,11 +557,17 @@ pdump_prepare_client_request(char *device, uint16_t queue,
 	return ret;
 }
 
-int
-rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
-			struct rte_ring *ring,
-			struct rte_mempool *mp,
-			void *filter)
+/*
+ * There are two versions of this function, because although the original
+ * API left a place holder for a future filter, it never checked the value.
+ * Therefore the API can't depend on the application passing a non-bogus
+ * value.
+ */
+static int
+pdump_enable(uint16_t port, uint16_t queue,
+	     uint32_t flags, uint32_t snaplen,
+	     struct rte_ring *ring, struct rte_mempool *mp,
+	     const struct rte_bpf_prm *prm)
 {
 	int ret;
 	char name[RTE_DEV_NAME_MAX_LEN];
@@ -496,20 +582,42 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						ENABLE, ring, mp, filter);
+	if (snaplen == 0)
+		snaplen = UINT32_MAX;
 
-	return ret;
+	return pdump_prepare_client_request(name, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
 }
 
 int
-rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
-				uint32_t flags,
-				struct rte_ring *ring,
-				struct rte_mempool *mp,
-				void *filter)
+rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
+		 struct rte_ring *ring,
+		 struct rte_mempool *mp,
+		 void *filter __rte_unused)
 {
-	int ret = 0;
+	return pdump_enable(port, queue, flags, 0,
+			    ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf(uint16_t port, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm)
+{
+	return pdump_enable(port, queue, flags, snaplen,
+			    ring, mp, prm);
+}
+
+static int
+pdump_enable_by_deviceid(const char *device_id, uint16_t queue,
+			 uint32_t flags, uint32_t snaplen,
+			 struct rte_ring *ring,
+			 struct rte_mempool *mp,
+			 const struct rte_bpf_prm *prm)
+{
+	int ret;
 
 	ret = pdump_validate_ring_mp(ring, mp);
 	if (ret < 0)
@@ -518,10 +626,30 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						ENABLE, ring, mp, filter);
+	return pdump_prepare_client_request(device_id, queue, flags, snaplen,
+					    ENABLE, ring, mp, prm);
+}
 
-	return ret;
+int
+rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
+			     uint32_t flags,
+			     struct rte_ring *ring,
+			     struct rte_mempool *mp,
+			     void *filter __rte_unused)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, 0,
+					ring, mp, NULL);
+}
+
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *prm)
+{
+	return pdump_enable_by_deviceid(device_id, queue, flags, snaplen,
+					ring, mp, prm);
 }
 
 int
@@ -537,8 +665,8 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags)
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(name, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(name, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
@@ -553,8 +681,65 @@ rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 	if (ret < 0)
 		return ret;
 
-	ret = pdump_prepare_client_request(device_id, queue, flags,
-						DISABLE, NULL, NULL, NULL);
+	ret = pdump_prepare_client_request(device_id, queue, flags, 0,
+					   DISABLE, NULL, NULL, NULL);
 
 	return ret;
 }
+
+static void
+pdump_sum_stats(uint16_t port, uint16_t nq,
+		struct rte_pdump_stats stats[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT],
+		struct rte_pdump_stats *total)
+{
+	uint64_t *sum = (uint64_t *)total;
+	unsigned int i;
+	uint64_t val;
+	uint16_t qid;
+
+	for (qid = 0; qid < nq; qid++) {
+		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+
+		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
+			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			sum[i] += val;
+		}
+	}
+}
+
+int
+rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats)
+{
+	struct rte_eth_dev_info dev_info;
+	const struct rte_memzone *mz;
+	int ret;
+
+	memset(stats, 0, sizeof(*stats));
+	ret = rte_eth_dev_info_get(port, &dev_info);
+	if (ret != 0) {
+		PDUMP_LOG(ERR,
+			  "Error during getting device (port %u) info: %s\n",
+			  port, strerror(-ret));
+		return ret;
+	}
+
+	if (pdump_stats == NULL) {
+		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+			PDUMP_LOG(ERR, "pdump stats not initialized\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+
+		mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS);
+		if (mz == NULL) {
+			PDUMP_LOG(ERR, "can not find pdump stats\n");
+			rte_errno = EINVAL;
+			return -1;
+		}
+		pdump_stats = mz->addr;
+	}
+
+	pdump_sum_stats(port, dev_info.nb_rx_queues, pdump_stats->rx, stats);
+	pdump_sum_stats(port, dev_info.nb_tx_queues, pdump_stats->tx, stats);
+	return 0;
+}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index 6b00fc17aeb2..6efa0274f2ce 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -15,6 +15,7 @@
 #include <stdint.h>
 #include <rte_mempool.h>
 #include <rte_ring.h>
+#include <rte_bpf.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,7 +27,9 @@ enum {
 	RTE_PDUMP_FLAG_RX = 1,  /* receive direction */
 	RTE_PDUMP_FLAG_TX = 2,  /* transmit direction */
 	/* both receive and transmit directions */
-	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX)
+	RTE_PDUMP_FLAG_RXTX = (RTE_PDUMP_FLAG_RX|RTE_PDUMP_FLAG_TX),
+
+	RTE_PDUMP_FLAG_PCAPNG = 4, /* format for pcapng */
 };
 
 /**
@@ -68,7 +71,7 @@ rte_pdump_uninit(void);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused, should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -80,6 +83,41 @@ rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
 		struct rte_mempool *mp,
 		void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given port and queue with filtering.
+ *
+ * @param port_id
+ *  The Ethernet port on which packet capturing should be enabled.
+ * @param queue
+ *  The queue on the Ethernet port which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param prm
+ *  BPF program used to filter packets (can be NULL).
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf(uint16_t port_id, uint16_t queue,
+		     uint32_t flags, uint32_t snaplen,
+		     struct rte_ring *ring,
+		     struct rte_mempool *mp,
+		     const struct rte_bpf_prm *prm);
+
 /**
  * Disables packet capturing on given port and queue.
  *
@@ -118,7 +156,7 @@ rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags);
  * @param mp
  *  mempool on to which original packets will be mirrored or duplicated.
  * @param filter
- *  place holder for packet filtering.
+ *  Unused, should be NULL.
  *
  * @return
  *    0 on success, -1 on error, rte_errno is set accordingly.
@@ -131,6 +169,43 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue,
 				struct rte_mempool *mp,
 				void *filter);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Enables packet capturing on given device id and queue with filtering.
+ * device_id can be name or pci address of device.
+ *
+ * @param device_id
+ *  device id on which packet capturing should be enabled.
+ * @param queue
+ *  The queue of the Ethernet port on which packet capturing
+ *  should be enabled. Pass UINT16_MAX to enable packet capturing on all
+ *  queues of a given port.
+ * @param flags
+ *  Pdump library flags that specify direction and packet format.
+ * @param snaplen
+ *  The upper limit on bytes to copy.
+ *  Passing UINT32_MAX means capture all the possible data.
+ * @param ring
+ *  The ring on which captured packets will be enqueued for user.
+ * @param mp
+ *  The mempool on to which original packets will be mirrored or duplicated.
+ * @param filter
+ *  BPF program used to filter packets (can be NULL)
+ *
+ * @return
+ *    0 on success, -1 on error, rte_errno is set accordingly.
+ */
+__rte_experimental
+int
+rte_pdump_enable_bpf_by_deviceid(const char *device_id, uint16_t queue,
+				 uint32_t flags, uint32_t snaplen,
+				 struct rte_ring *ring,
+				 struct rte_mempool *mp,
+				 const struct rte_bpf_prm *filter);
+
+
 /**
  * Disables packet capturing on given device_id and queue.
  * device_id can be name or pci address of device.
@@ -153,6 +228,38 @@ int
 rte_pdump_disable_by_deviceid(char *device_id, uint16_t queue,
 				uint32_t flags);
 
+
+/**
+ * A structure used to retrieve statistics from packet capture.
+ * The statistics are sum of both receive and transmit queues.
+ */
+struct rte_pdump_stats {
+	uint64_t accepted; /**< Number of packets accepted by filter. */
+	uint64_t filtered; /**< Number of packets rejected by filter. */
+	uint64_t nombuf;   /**< Number of mbuf allocation failures. */
+	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+
+	uint64_t reserved[4]; /**< Reserved and padding to a cache line */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve the packet capture statistics for a queue.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param stats
+ *   A pointer to structure of type *rte_pdump_stats* to be filled in.
+ * @return
+ *   Zero if successful. -1 on error and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_pdump_stats(uint16_t port_id, struct rte_pdump_stats *stats);
+
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index f0a9d12c9a9e..ce5502d9cdf4 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -10,3 +10,11 @@ DPDK_22 {
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	global:
+
+	rte_pdump_enable_bpf;
+	rte_pdump_enable_bpf_by_deviceid;
+	rte_pdump_stats;
+};
-- 
2.30.2


^ permalink raw reply	[relevance 1%]
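[Editorial note] For illustration, a secondary-process caller of the new BPF-capture API might look like the sketch below. It is not standalone-runnable: it presumes a primary process has already initialized the capture framework, and the ring/mempool setup is elided; the helper name is invented.

```c
#include <rte_ring.h>
#include <rte_mempool.h>
#include <rte_pdump.h>

/* Sketch: enable pcapng-format capture of Rx and Tx on all queues of
 * port 0, truncating each captured packet to 256 bytes, no BPF filter.
 * Captured packets are enqueued onto 'ring', copies allocated from 'pool'.
 */
static int
start_capture(struct rte_ring *ring, struct rte_mempool *pool)
{
	return rte_pdump_enable_bpf(0 /* port_id */,
				    UINT16_MAX /* all queues */,
				    RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG,
				    256 /* snaplen */,
				    ring, pool,
				    NULL /* no BPF filter */);
}
```

Passing a non-NULL `struct rte_bpf_prm *` in the last argument would attach a BPF filter instead of capturing everything.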

* [dpdk-dev] [PATCH v12 11/12] doc: changes for new pcapng and dumpcap
    2021-10-01 16:26  1%   ` [dpdk-dev] [PATCH v12 06/12] pdump: support pcapng and filtering Stephen Hemminger
@ 2021-10-01 16:27  1%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-01 16:27 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

Describe the new packet capture library and utilities

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/api/doxy-api-index.md                     |  1 +
 doc/api/doxy-api.conf.in                      |  1 +
 .../howto/img/packet_capture_framework.svg    | 96 +++++++++----------
 doc/guides/howto/packet_capture_framework.rst | 67 ++++++-------
 doc/guides/prog_guide/index.rst               |  1 +
 doc/guides/prog_guide/pcapng_lib.rst          | 24 +++++
 doc/guides/prog_guide/pdump_lib.rst           | 28 ++++--
 doc/guides/rel_notes/release_21_11.rst        | 10 ++
 doc/guides/tools/dumpcap.rst                  | 86 +++++++++++++++++
 doc/guides/tools/index.rst                    |  1 +
 10 files changed, 228 insertions(+), 87 deletions(-)
 create mode 100644 doc/guides/prog_guide/pcapng_lib.rst
 create mode 100644 doc/guides/tools/dumpcap.rst

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 1992107a0356..ee07394d1c78 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -223,3 +223,4 @@ The public API headers are grouped by topics:
   [experimental APIs]  (@ref rte_compat.h),
   [ABI versioning]     (@ref rte_function_versioning.h),
   [version]            (@ref rte_version.h)
+  [pcapng]             (@ref rte_pcapng.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 325a0195c6ab..aba17799a9a1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -58,6 +58,7 @@ INPUT                   = @TOPDIR@/doc/api/doxy-api-index.md \
                           @TOPDIR@/lib/metrics \
                           @TOPDIR@/lib/node \
                           @TOPDIR@/lib/net \
+                          @TOPDIR@/lib/pcapng \
                           @TOPDIR@/lib/pci \
                           @TOPDIR@/lib/pdump \
                           @TOPDIR@/lib/pipeline \
diff --git a/doc/guides/howto/img/packet_capture_framework.svg b/doc/guides/howto/img/packet_capture_framework.svg
index a76baf71fdee..1c2646a81096 100644
--- a/doc/guides/howto/img/packet_capture_framework.svg
+++ b/doc/guides/howto/img/packet_capture_framework.svg
@@ -1,6 +1,4 @@
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
 <svg
    xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
@@ -16,8 +14,8 @@
    viewBox="0 0 425.19685 283.46457"
    id="svg2"
    version="1.1"
-   inkscape:version="0.91 r13725"
-   sodipodi:docname="drawing-pcap.svg">
+   inkscape:version="1.0.2 (e86c870879, 2021-01-15)"
+   sodipodi:docname="packet_capture_framework.svg">
   <defs
      id="defs4">
     <marker
@@ -228,7 +226,7 @@
        x2="487.64606"
        y2="258.38232"
        gradientUnits="userSpaceOnUse"
-       gradientTransform="translate(-84.916417,744.90779)" />
+       gradientTransform="matrix(1.1457977,0,0,0.99944907,-151.97019,745.05014)" />
     <linearGradient
        inkscape:collect="always"
        xlink:href="#linearGradient5784"
@@ -277,17 +275,18 @@
      borderopacity="1.0"
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
-     inkscape:zoom="0.57434918"
-     inkscape:cx="215.17857"
-     inkscape:cy="285.26445"
+     inkscape:zoom="1"
+     inkscape:cx="226.77165"
+     inkscape:cy="78.124511"
      inkscape:document-units="px"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1874"
-     inkscape:window-height="971"
-     inkscape:window-x="2"
-     inkscape:window-y="24"
-     inkscape:window-maximized="0" />
+     inkscape:window-width="2560"
+     inkscape:window-height="1414"
+     inkscape:window-x="0"
+     inkscape:window-y="0"
+     inkscape:window-maximized="1"
+     inkscape:document-rotation="0" />
   <metadata
      id="metadata7">
     <rdf:RDF>
@@ -296,7 +295,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title></dc:title>
+        <dc:title />
       </cc:Work>
     </rdf:RDF>
   </metadata>
@@ -321,15 +320,15 @@
        y="790.82452" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="61.050636"
        y="807.3205"
-       id="text4152"
-       sodipodi:linespacing="125%"><tspan
+       id="text4152"><tspan
          sodipodi:role="line"
          id="tspan4154"
          x="61.050636"
-         y="807.3205">DPDK Primary Application</tspan></text>
+         y="807.3205"
+         style="font-size:12.5px;line-height:1.25">DPDK Primary Application</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6"
@@ -339,19 +338,20 @@
        y="827.01843" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="350.68585"
        y="841.16058"
-       id="text4189"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189"><tspan
          sodipodi:role="line"
          id="tspan4191"
          x="350.68585"
-         y="841.16058">dpdk-pdump</tspan><tspan
+         y="841.16058"
+         style="font-size:12.5px;line-height:1.25">dpdk-dumpcap</tspan><tspan
          sodipodi:role="line"
          x="350.68585"
          y="856.78558"
-         id="tspan4193">tool</tspan></text>
+         id="tspan4193"
+         style="font-size:12.5px;line-height:1.25">tool</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4"
@@ -361,15 +361,15 @@
        y="891.16315" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70612"
        y="905.3053"
-       id="text4189-1"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1"><tspan
          sodipodi:role="line"
          x="352.70612"
          y="905.3053"
-         id="tspan4193-3">PCAP PMD</tspan></text>
+         id="tspan4193-3"
+         style="font-size:12.5px;line-height:1.25">librte_pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5745);fill-opacity:1;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-6"
@@ -379,15 +379,15 @@
        y="923.9931" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.02846"
        y="938.13525"
-       id="text4189-0"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-0"><tspan
          sodipodi:role="line"
          x="136.02846"
          y="938.13525"
-         id="tspan4193-6">dpdk_port0</tspan></text>
+         id="tspan4193-6"
+         style="font-size:12.5px;line-height:1.25">dpdk_port0</tspan></text>
     <rect
        style="fill:#000000;fill-opacity:0;stroke:#257cdc;stroke-width:2;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-5"
@@ -397,33 +397,33 @@
        y="824.99817" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="137.54369"
        y="839.14026"
-       id="text4189-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-4"><tspan
          sodipodi:role="line"
          x="137.54369"
          y="839.14026"
-         id="tspan4193-2">librte_pdump</tspan></text>
+         id="tspan4193-2"
+         style="font-size:12.5px;line-height:1.25">librte_pdump</tspan></text>
     <rect
-       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
+       style="fill:url(#linearGradient5788);fill-opacity:1;stroke:#257cdc;stroke-width:1.07013;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5"
-       width="94.449265"
-       height="35.355339"
-       x="307.7804"
-       y="985.61243" />
+       width="108.21974"
+       height="35.335861"
+       x="297.9809"
+       y="985.62219" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="352.70618"
        y="999.75458"
-       id="text4189-1-8"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8"><tspan
          sodipodi:role="line"
          x="352.70618"
          y="999.75458"
-         id="tspan4193-3-2">capture.pcap</tspan></text>
+         id="tspan4193-3-2"
+         style="font-size:12.5px;line-height:1.25">capture.pcapng</tspan></text>
     <rect
        style="fill:url(#linearGradient5788-1);fill-opacity:1;stroke:#257cdc;stroke-width:1.12555885;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
        id="rect4156-6-4-5-1"
@@ -433,15 +433,15 @@
        y="983.14984" />
     <text
        xml:space="preserve"
-       style="font-style:normal;font-weight:normal;font-size:12.5px;line-height:125%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
+       style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        x="136.53352"
        y="1002.785"
-       id="text4189-1-8-4"
-       sodipodi:linespacing="125%"><tspan
+       id="text4189-1-8-4"><tspan
          sodipodi:role="line"
          x="136.53352"
          y="1002.785"
-         id="tspan4193-3-2-7">Traffic Generator</tspan></text>
+         id="tspan4193-3-2-7"
+         style="font-size:12.5px;line-height:1.25">Traffic Generator</tspan></text>
     <path
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker7331)"
        d="m 351.46948,927.02357 c 0,57.5787 0,57.5787 0,57.5787"
diff --git a/doc/guides/howto/packet_capture_framework.rst b/doc/guides/howto/packet_capture_framework.rst
index c31bac52340e..78baa609a021 100644
--- a/doc/guides/howto/packet_capture_framework.rst
+++ b/doc/guides/howto/packet_capture_framework.rst
@@ -1,18 +1,19 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2017 Intel Corporation.
 
-DPDK pdump Library and pdump Tool
-=================================
+DPDK packet capture libraries and tools
+=======================================
 
 This document describes how the Data Plane Development Kit (DPDK) Packet
 Capture Framework is used for capturing packets on DPDK ports. It is intended
 for users of DPDK who want to know more about the Packet Capture feature and
 for those who want to monitor traffic on DPDK-controlled devices.
 
-The DPDK packet capture framework was introduced in DPDK v16.07. The DPDK
-packet capture framework consists of the DPDK pdump library and DPDK pdump
-tool.
-
+The DPDK packet capture framework was introduced in DPDK v16.07 and
+enhanced in 21.11. The DPDK packet capture framework consists of the
+libraries for collecting packets ``librte_pdump`` and writing packets
+to a file ``librte_pcapng``. There are two sample applications:
+``dpdk-dumpcap`` and older ``dpdk-pdump``.
 
 Introduction
 ------------
@@ -22,43 +23,46 @@ allow users to initialize the packet capture framework and to enable or
 disable packet capture. The library works on a multi process communication model and its
 usage is recommended for debugging purposes.
 
-The :ref:`dpdk-pdump <pdump_tool>` tool is developed based on the
-``librte_pdump`` library.  It runs as a DPDK secondary process and is capable
-of enabling or disabling packet capture on DPDK ports. The ``dpdk-pdump`` tool
-provides command-line options with which users can request enabling or
-disabling of the packet capture on DPDK ports.
+The :ref:`librte_pcapng <pcapng_library>` library provides the APIs to format
+packets and write them to a file in Pcapng format.
+
+
+The :ref:`dpdk-dumpcap <dumpcap_tool>` is a tool that captures packets
+much as the Wireshark ``dumpcap`` does on Linux. It runs as a DPDK secondary process and
+captures packets from one or more interfaces and writes them to a file
+in Pcapng format.  The ``dpdk-dumpcap`` tool is designed to take
+most of the same options as the Wireshark ``dumpcap`` command.
 
-The application which initializes the packet capture framework will be a primary process
-and the application that enables or disables the packet capture will
-be a secondary process. The primary process sends the Rx and Tx packets from the DPDK ports
-to the secondary process.
+Without any options it will use the packet capture framework to
+capture traffic from the first available DPDK port.
 
 In DPDK the ``testpmd`` application can be used to initialize the packet
-capture framework and acts as a server, and the ``dpdk-pdump`` tool acts as a
+capture framework and acts as a server, and the ``dpdk-dumpcap`` tool acts as a
 client. To view Rx or Tx packets of ``testpmd``, the application should be
-launched first, and then the ``dpdk-pdump`` tool. Packets from ``testpmd``
-will be sent to the tool, which then sends them on to the Pcap PMD device and
-that device writes them to the Pcap file or to an external interface depending
-on the command-line option used.
+launched first, and then the ``dpdk-dumpcap`` tool. Packets from ``testpmd``
+will be sent to the tool, and then to the Pcapng file.
 
 Some things to note:
 
-* The ``dpdk-pdump`` tool can only be used in conjunction with a primary
+* All tools using ``librte_pdump`` can only be used in conjunction with a primary
   application which has the packet capture framework initialized already. In
   dpdk, only ``testpmd`` is modified to initialize packet capture framework,
-  other applications remain untouched. So, if the ``dpdk-pdump`` tool has to
+  other applications remain untouched. So, if the ``dpdk-dumpcap`` tool has to
   be used with any application other than the testpmd, the user needs to
   explicitly modify that application to call the packet capture framework
   initialization code. Refer to the ``app/test-pmd/testpmd.c`` code and look
   for ``pdump`` keyword to see how this is done.
 
-* The ``dpdk-pdump`` tool depends on the libpcap based PMD.
+* The ``dpdk-pdump`` tool is an older tool created as a demonstration of the
+  ``librte_pdump`` library. The ``dpdk-pdump`` tool provides more limited
+  functionality and depends on the Pcap PMD. It is retained only for
+  compatibility reasons; users should use ``dpdk-dumpcap`` instead.
 
 
 Test Environment
 ----------------
 
-The overview of using the Packet Capture Framework and the ``dpdk-pdump`` tool
+The overview of using the Packet Capture Framework and the ``dpdk-dumpcap`` utility
 for packet capturing on the DPDK port in
 :numref:`figure_packet_capture_framework`.
 
@@ -66,13 +70,13 @@ for packet capturing on the DPDK port in
 
 .. figure:: img/packet_capture_framework.*
 
-   Packet capturing on a DPDK port using the dpdk-pdump tool.
+   Packet capturing on a DPDK port using the dpdk-dumpcap utility.
 
 
 Running the Application
 -----------------------
 
-The following steps demonstrate how to run the ``dpdk-pdump`` tool to capture
+The following steps demonstrate how to run the ``dpdk-dumpcap`` tool to capture
 Rx side packets on dpdk_port0 in :numref:`figure_packet_capture_framework` and
 inspect them using ``tcpdump``.
 
@@ -80,16 +84,15 @@ inspect them using ``tcpdump``.
 
      sudo <build_dir>/app/dpdk-testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
 
-#. Launch the pdump tool as follows::
+#. Launch the dpdk-dumpcap as follows::
 
-     sudo <build_dir>/app/dpdk-pdump -- \
-          --pdump 'port=0,queue=*,rx-dev=/tmp/capture.pcap'
+     sudo <build_dir>/app/dpdk-dumpcap -w /tmp/capture.pcapng
 
 #. Send traffic to dpdk_port0 from traffic generator.
-   Inspect packets captured in the file capture.pcap using a tool
-   that can interpret Pcap files, for example tcpdump::
+   Inspect packets captured in the file capture.pcapng using a tool such as
+   tcpdump or tshark that can interpret Pcapng files::
 
-     $tcpdump -nr /tmp/capture.pcap
+     $ tcpdump -nr /tmp/capture.pcapng
      reading from file /tmp/capture.pcap, link-type EN10MB (Ethernet)
      11:11:36.891404 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
      11:11:36.891442 IP 4.4.4.4.whois++ > 3.3.3.3.whois++: UDP, length 18
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 2dce507f46a3..b440c77c2ba1 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -43,6 +43,7 @@ Programmer's Guide
     ip_fragment_reassembly_lib
     generic_receive_offload_lib
     generic_segmentation_offload_lib
+    pcapng_lib
     pdump_lib
     multi_proc_support
     kernel_nic_interface
diff --git a/doc/guides/prog_guide/pcapng_lib.rst b/doc/guides/prog_guide/pcapng_lib.rst
new file mode 100644
index 000000000000..36379b530a57
--- /dev/null
+++ b/doc/guides/prog_guide/pcapng_lib.rst
@@ -0,0 +1,24 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2016 Intel Corporation.
+
+.. _pcapng_library:
+
+Packet Capture File Writer
+==========================
+
+Pcapng is a library for creating files in Pcapng file format.
+The Pcapng file format is the default capture file format for modern
+network capture processing tools. It can be read by Wireshark and tcpdump.
+
+Usage
+-----
+
+Before the library can be used the function ``rte_pcapng_init``
+should be called once to initialize timestamp computation.
+
+
+References
+----------
+* Draft RFC https://www.ietf.org/id/draft-tuexen-opsawg-pcapng-03.html
+
+* Project repository  https://github.com/pcapng/pcapng/
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 62c0b015b2fe..9af91415e5ea 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -3,10 +3,10 @@
 
 .. _pdump_library:
 
-The librte_pdump Library
-========================
+The Packet Capture Library
+==========================
 
-The ``librte_pdump`` library provides a framework for packet capturing in DPDK.
+The DPDK ``pdump`` library provides a framework for packet capturing in DPDK.
 The library does the complete copy of the Rx and Tx mbufs to a new mempool and
 hence it slows down the performance of the applications, so it is recommended
 to use this library for debugging purposes.
@@ -23,11 +23,19 @@ or disable the packet capture, and to uninitialize it.
 
 * ``rte_pdump_enable()``:
   This API enables the packet capture on a given port and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf()``
+  This API enables the packet capture on a given port and queue.
+  It also allows setting an optional filter using DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_enable_by_deviceid()``:
   This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
-  Note: The filter option in the API is a place holder for future enhancements.
+
+* ``rte_pdump_enable_bpf_by_deviceid()``
+  This API enables the packet capture on a given device id (``vdev name or pci address``) and queue.
+  It also allows setting an optional filter using DPDK BPF interpreter and
+  setting the captured packet length.
 
 * ``rte_pdump_disable()``:
   This API disables the packet capture on a given port and queue.
@@ -61,6 +69,12 @@ and enables the packet capture by registering the Ethernet RX and TX callbacks f
 and queue combinations. Then the primary process will mirror the packets to the new mempool and enqueue them to
 the rte_ring that secondary process have passed to these APIs.
 
+The packet ring supports one of two formats. The default format enqueues copies of the original packets
+into the rte_ring. If the ``RTE_PDUMP_FLAG_PCAPNG`` flag is set, the mbuf data is extended with a header and trailer
+to match the format of Pcapng enhanced packet block. The enhanced packet block has meta-data such as the
+timestamp, port and queue the packet was captured on. It is up to the application consuming the
+packets from the ring to select the format desired.
+
 The library APIs ``rte_pdump_disable()`` and ``rte_pdump_disable_by_deviceid()`` disables the packet capture.
 For the calls to these APIs from secondary process, the library creates the "pdump disable" request and sends
 the request to the primary process over the multi process channel. The primary process takes this request and
@@ -74,5 +88,5 @@ function.
 Use Case: Packet Capturing
 --------------------------
 
-The DPDK ``app/pdump`` tool is developed based on this library to capture packets in DPDK.
-Users can use this as an example to develop their own packet capturing tools.
+The DPDK ``app/dpdk-dumpcap`` utility uses this library
+to capture packets in DPDK.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a77866a..eb93bd6b9157 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -125,6 +125,16 @@ New Features
   * Added tests to validate packets hard expiry.
   * Added tests to verify tunnel header verification in IPsec inbound.
 
+* **Revised packet capture framework.**
+
+  * New dpdk-dumpcap program that has most of the features of the
+    Wireshark dumpcap utility, including: capture of multiple interfaces,
+    filtering, and stopping after a number of bytes or packets.
+  * New library for writing pcapng packet capture files.
+  * Enhancements to the pdump library: packet filtering with BPF,
+    Pcapng format with timestamps and meta-data, and a fix for
+    capturing packets with stripped VLAN tags.
+
 
 Removed Items
 -------------
diff --git a/doc/guides/tools/dumpcap.rst b/doc/guides/tools/dumpcap.rst
new file mode 100644
index 000000000000..664ea0c79802
--- /dev/null
+++ b/doc/guides/tools/dumpcap.rst
@@ -0,0 +1,86 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Microsoft Corporation.
+
+.. _dumpcap_tool:
+
+dpdk-dumpcap Application
+========================
+
+The ``dpdk-dumpcap`` tool is a Data Plane Development Kit (DPDK)
+network traffic dump tool.  The interface is similar to the dumpcap tool in Wireshark.
+It runs as a secondary DPDK process and lets you capture packets that are
+coming into and out of a DPDK primary process.
+The ``dpdk-dumpcap`` writes captured packets to a file
+in the Pcapng capture file format.
+
+Without any options set it will use DPDK to capture traffic from the first
+available DPDK interface and write the received raw packet data, along
+with timestamps into a pcapng file.
+
+If the ``-w`` option is not specified, ``dpdk-dumpcap`` writes to a newly
+created file with a name chosen based on interface name and timestamp.
+If the ``-w`` option is specified, then that file is used.
+
+   .. Note::
+      * The ``dpdk-dumpcap`` tool can only be used in conjunction with a primary
+        application which has the packet capture framework initialized already.
+        In dpdk, only the ``testpmd`` is modified to initialize packet capture
+        framework, other applications remain untouched. So, if the ``dpdk-dumpcap``
+        tool has to be used with any application other than the testpmd, user
+        needs to explicitly modify that application to call packet capture
+        framework initialization code. Refer ``app/test-pmd/testpmd.c``
+        code to see how this is done.
+
+      * The ``dpdk-dumpcap`` tool runs as a DPDK secondary process. It exits when
+        the primary application exits.
+
+
+Running the Application
+-----------------------
+
+To list interfaces available for capture use ``--list-interfaces``.
+
+To filter packets in the style of *tshark*, use the ``-f`` flag.
+
+To capture on multiple interfaces at once, use multiple ``-I`` flags.
+
+Example
+-------
+
+.. code-block:: console
+
+   # ./<build_dir>/app/dpdk-dumpcap --list-interfaces
+   0. 000:00:03.0
+   1. 000:00:03.1
+
+   # ./<build_dir>/app/dpdk-dumpcap -I 0000:00:03.0 -c 6 -w /tmp/sample.pcapng
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 6/0
+
+   # ./<build_dir>/app/dpdk-dumpcap -f 'tcp port 80'
+   Packets captured: 6
+   Packets received/dropped on interface '0000:00:03.0' 10/8
+
+
+Limitations
+-----------
+The following option of Wireshark ``dumpcap`` is not yet implemented:
+
+   * ``-b|--ring-buffer`` -- more complex file management.
+
+The following options do not make sense in the context of DPDK:
+
+   * ``-C <byte_limit>`` -- it's a kernel thing
+
+   * ``-t`` -- use a thread per interface
+
+   * Timestamp type.
+
+   * Link data types. Only EN10MB (Ethernet) is supported.
+
+   * Wireless related options:  ``-I|--monitor-mode`` and  ``-k <freq>``
+
+
+.. Note::
+   * The options accepted by ``dpdk-dumpcap`` follow the Wireshark ``dumpcap``
+     program and are not the same as those of ``dpdk-pdump`` and other DPDK
+     applications.
diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst
index 93dde4148e90..b71c12b8f2dd 100644
--- a/doc/guides/tools/index.rst
+++ b/doc/guides/tools/index.rst
@@ -8,6 +8,7 @@ DPDK Tools User Guides
     :maxdepth: 2
     :numbered:
 
+    dumpcap
     proc_info
     pdump
     pmdinfo
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v2 1/2] net: rename Ethernet header fields
  @ 2021-10-01 16:33  1%   ` Dmitry Kozlyuk
    1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-01 16:33 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk, Ferruh Yigit, Olivier Matz, Stephen Hemminger

Definition of `rte_ether_addr` structure used a workaround allowing DPDK
and Windows SDK headers to be used in the same file, because Windows SDK
defines `s_addr` as a macro. Rename `s_addr` to `src_addr` and `d_addr`
to `dst_addr` to avoid the conflict and remove the workaround.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2021-July/215270.html

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 app/test-pmd/5tswap.c                         |  6 +--
 app/test-pmd/csumonly.c                       |  4 +-
 app/test-pmd/flowgen.c                        |  4 +-
 app/test-pmd/icmpecho.c                       | 16 ++++----
 app/test-pmd/ieee1588fwd.c                    |  6 +--
 app/test-pmd/macfwd.c                         |  4 +-
 app/test-pmd/macswap.h                        |  4 +-
 app/test-pmd/txonly.c                         |  4 +-
 app/test-pmd/util.c                           |  4 +-
 app/test/packet_burst_generator.c             |  4 +-
 app/test/test_bpf.c                           |  4 +-
 app/test/test_link_bonding_mode4.c            | 15 +++----
 doc/guides/rel_notes/deprecation.rst          |  3 --
 doc/guides/rel_notes/release_21_11.rst        |  3 ++
 drivers/net/avp/avp_ethdev.c                  |  6 +--
 drivers/net/bnx2x/bnx2x.c                     | 16 ++++----
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  6 +--
 drivers/net/bonding/rte_eth_bond_alb.c        |  4 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 20 +++++-----
 drivers/net/enic/enic_flow.c                  |  8 ++--
 drivers/net/mlx5/mlx5_txpp.c                  |  4 +-
 examples/bond/main.c                          | 14 ++++---
 examples/ethtool/ethtool-app/main.c           |  4 +-
 examples/eventdev_pipeline/pipeline_common.h  |  4 +-
 examples/flow_filtering/main.c                |  4 +-
 examples/ioat/ioatfwd.c                       |  4 +-
 examples/ip_fragmentation/main.c              |  4 +-
 examples/ip_reassembly/main.c                 |  4 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  4 +-
 examples/ipsec-secgw/ipsec_worker.c           |  4 +-
 examples/ipv4_multicast/main.c                |  4 +-
 examples/l2fwd-crypto/main.c                  |  4 +-
 examples/l2fwd-event/l2fwd_common.h           |  4 +-
 examples/l2fwd-jobstats/main.c                |  4 +-
 examples/l2fwd-keepalive/main.c               |  4 +-
 examples/l2fwd/main.c                         |  4 +-
 examples/l3fwd-acl/main.c                     | 19 +++++----
 examples/l3fwd-power/main.c                   |  6 +--
 examples/l3fwd/l3fwd_em.h                     |  4 +-
 examples/l3fwd/l3fwd_fib.c                    |  2 +-
 examples/l3fwd/l3fwd_lpm.c                    |  2 +-
 examples/l3fwd/l3fwd_lpm.h                    |  4 +-
 examples/link_status_interrupt/main.c         |  4 +-
 .../performance-thread/l3fwd-thread/main.c    | 40 +++++++++----------
 examples/ptpclient/ptpclient.c                | 16 ++++----
 examples/vhost/main.c                         | 10 ++---
 examples/vmdq/main.c                          |  4 +-
 examples/vmdq_dcb/main.c                      |  4 +-
 lib/ethdev/rte_flow.h                         |  4 +-
 lib/gro/gro_tcp4.c                            |  4 +-
 lib/gro/gro_udp4.c                            |  4 +-
 lib/gro/gro_vxlan_tcp4.c                      |  8 ++--
 lib/gro/gro_vxlan_udp4.c                      |  8 ++--
 lib/net/rte_arp.c                             |  4 +-
 lib/net/rte_ether.h                           | 22 +---------
 lib/pipeline/rte_table_action.c               | 40 +++++++++----------
 56 files changed, 206 insertions(+), 218 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index e8cef9623b..629d3e0d31 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -27,9 +27,9 @@ swap_mac(struct rte_ether_hdr *eth_hdr)
 	struct rte_ether_addr addr;
 
 	/* Swap dest and src mac addresses. */
-	rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-	rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+	rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+	rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 }
 
 static inline void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533..090797318a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -873,9 +873,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 
 		eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 		rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
-				&eth_hdr->d_addr);
+				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		parse_ethernet(eth_hdr, &info);
 		l3_hdr = (char *)eth_hdr + info.l2_len;
 
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a64d..a96169e680 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -122,8 +122,8 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 			/* Initialize Ethernet header. */
 			eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
-			rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
-			rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
+			rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->dst_addr);
+			rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->src_addr);
 			eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
 			/* Initialize IP header. */
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 8948f28eb5..8f1d68a83a 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -319,8 +319,8 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 		if (verbose_level > 0) {
 			printf("\nPort %d pkt-len=%u nb-segs=%u\n",
 			       fs->rx_port, pkt->pkt_len, pkt->nb_segs);
-			ether_addr_dump("  ETH:  src=", &eth_h->s_addr);
-			ether_addr_dump(" dst=", &eth_h->d_addr);
+			ether_addr_dump("  ETH:  src=", &eth_h->src_addr);
+			ether_addr_dump(" dst=", &eth_h->dst_addr);
 		}
 		if (eth_type == RTE_ETHER_TYPE_VLAN) {
 			vlan_h = (struct rte_vlan_hdr *)
@@ -385,17 +385,17 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			 */
 
 			/* Use source MAC address as destination MAC address. */
-			rte_ether_addr_copy(&eth_h->s_addr, &eth_h->d_addr);
+			rte_ether_addr_copy(&eth_h->src_addr, &eth_h->dst_addr);
 			/* Set source MAC address with MAC address of TX port */
 			rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-					&eth_h->s_addr);
+					&eth_h->src_addr);
 
 			arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 			rte_ether_addr_copy(&arp_h->arp_data.arp_tha,
 					&eth_addr);
 			rte_ether_addr_copy(&arp_h->arp_data.arp_sha,
 					&arp_h->arp_data.arp_tha);
-			rte_ether_addr_copy(&eth_h->s_addr,
+			rte_ether_addr_copy(&eth_h->src_addr,
 					&arp_h->arp_data.arp_sha);
 
 			/* Swap IP addresses in ARP payload */
@@ -453,9 +453,9 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 		 * ICMP checksum is computed by assuming it is valid in the
 		 * echo request and not verified.
 		 */
-		rte_ether_addr_copy(&eth_h->s_addr, &eth_addr);
-		rte_ether_addr_copy(&eth_h->d_addr, &eth_h->s_addr);
-		rte_ether_addr_copy(&eth_addr, &eth_h->d_addr);
+		rte_ether_addr_copy(&eth_h->src_addr, &eth_addr);
+		rte_ether_addr_copy(&eth_h->dst_addr, &eth_h->src_addr);
+		rte_ether_addr_copy(&eth_addr, &eth_h->dst_addr);
 		ip_addr = ip_h->src_addr;
 		if (is_multicast_ipv4_addr(ip_h->dst_addr)) {
 			uint32_t ip_src;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..9cf10c1c50 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -178,9 +178,9 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	port_ieee1588_rx_timestamp_check(fs->rx_port, timesync_index);
 
 	/* Swap dest and src mac addresses. */
-	rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-	rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+	rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+	rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
 	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d..ee76df7f03 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -85,9 +85,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		mb = pkts_burst[i];
 		eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 		rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
-				&eth_hdr->d_addr);
+				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
diff --git a/app/test-pmd/macswap.h b/app/test-pmd/macswap.h
index 0138441566..20823b9b8c 100644
--- a/app/test-pmd/macswap.h
+++ b/app/test-pmd/macswap.h
@@ -30,8 +30,8 @@ do_macswap(struct rte_mbuf *pkts[], uint16_t nb,
 
 		/* Swap dest and src mac addresses. */
-		rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-		rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-		rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+		rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+		rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+		rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 		mbuf_field_set(mb, ol_flags);
 	}
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..40655801cc 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -362,8 +362,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	/*
 	 * Initialize Ethernet header.
 	 */
-	rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
-	rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
+	rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.dst_addr);
+	rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.src_addr);
 	eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
 	if (rte_mempool_get_bulk(mbp, (void **)pkts_burst,
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..51506e4940 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -142,9 +142,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 					  " - no miss group");
 			MKDUMPSTR(print_buf, buf_size, cur_len, "\n");
 		}
-		print_ether_addr("  src=", &eth_hdr->s_addr,
+		print_ether_addr("  src=", &eth_hdr->src_addr,
 				 print_buf, buf_size, &cur_len);
-		print_ether_addr(" - dst=", &eth_hdr->d_addr,
+		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
 			  " - type=0x%04x - length=%u - nb_segs=%d",
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 0fd7290b0e..8ac24577ba 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -56,8 +56,8 @@ initialize_eth_header(struct rte_ether_hdr *eth_hdr,
 		struct rte_ether_addr *dst_mac, uint16_t ether_type,
 		uint8_t vlan_enabled, uint16_t van_id)
 {
-	rte_ether_addr_copy(dst_mac, &eth_hdr->d_addr);
-	rte_ether_addr_copy(src_mac, &eth_hdr->s_addr);
+	rte_ether_addr_copy(dst_mac, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(src_mac, &eth_hdr->src_addr);
 
 	if (vlan_enabled) {
 		struct rte_vlan_hdr *vhdr = (struct rte_vlan_hdr *)(
diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index 527c06b807..8118a1849b 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -1008,9 +1008,9 @@ test_jump2_prepare(void *arg)
 	 * Initialize ether header.
 	 */
 	rte_ether_addr_copy((struct rte_ether_addr *)dst_mac,
-			    &dn->eth_hdr.d_addr);
+			    &dn->eth_hdr.dst_addr);
 	rte_ether_addr_copy((struct rte_ether_addr *)src_mac,
-			    &dn->eth_hdr.s_addr);
+			    &dn->eth_hdr.src_addr);
 	dn->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 
 	/*
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7ad..f120b2e3be 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -502,8 +502,8 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	slow_hdr = rte_pktmbuf_mtod(pkt, struct slow_protocol_frame *);
 
 	/* Change source address to partner address */
-	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.s_addr);
-	slow_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
+	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 		slave->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
@@ -870,7 +870,7 @@ test_mode4_rx(void)
 
 		for (i = 0; i < expected_pkts_cnt; i++) {
 			hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
-			cnt[rte_is_same_ether_addr(&hdr->d_addr,
+			cnt[rte_is_same_ether_addr(&hdr->dst_addr,
 							&bonded_mac)]++;
 		}
 
@@ -918,7 +918,7 @@ test_mode4_rx(void)
 
 		for (i = 0; i < expected_pkts_cnt; i++) {
 			hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
-			eq_cnt += rte_is_same_ether_addr(&hdr->d_addr,
+			eq_cnt += rte_is_same_ether_addr(&hdr->dst_addr,
 							&bonded_mac);
 		}
 
@@ -1163,11 +1163,12 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 
 	/* Copy multicast destination address */
 	rte_ether_addr_copy(&slow_protocol_mac_addr,
-			&marker_hdr->eth_hdr.d_addr);
+			&marker_hdr->eth_hdr.dst_addr);
 
 	/* Init source address */
-	rte_ether_addr_copy(&parnter_mac_default, &marker_hdr->eth_hdr.s_addr);
-	marker_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+	rte_ether_addr_copy(&parnter_mac_default,
+			&marker_hdr->eth_hdr.src_addr);
+	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 		slave->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..918ea3f403 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -172,9 +172,6 @@ Deprecation Notices
   can still be used if users specify the devarg "driver=i40evf". I40evf will
   be deleted in DPDK 21.11.
 
-* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
-  will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
-
 * net: The structure ``rte_ipv4_hdr`` will have two unions.
   The first union is for existing ``version_ihl`` byte
   and new bitfield for version and IHL.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..9ed3d84b32 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -178,6 +178,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
+  to ``src_addr`` and ``dst_addr``, respectively.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..b5fafd32b0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1205,17 +1205,17 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
 {
 	struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->dst_addr) == 0)) {
 		/* allow all packets destined to our address */
 		return 0;
 	}
 
-	if (likely(rte_is_broadcast_ether_addr(&eth->d_addr))) {
+	if (likely(rte_is_broadcast_ether_addr(&eth->dst_addr))) {
 		/* allow all broadcast packets */
 		return 0;
 	}
 
-	if (likely(rte_is_multicast_ether_addr(&eth->d_addr))) {
+	if (likely(rte_is_multicast_ether_addr(&eth->dst_addr))) {
 		/* allow all multicast packets */
 		return 0;
 	}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9163b8b1fd..083deff1b1 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2233,8 +2233,8 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 		tx_parse_bd =
 		    &txq->tx_ring[TX_BD(bd_prod, txq)].parse_bd_e2;
-		if (rte_is_multicast_ether_addr(&eh->d_addr)) {
-			if (rte_is_broadcast_ether_addr(&eh->d_addr))
+		if (rte_is_multicast_ether_addr(&eh->dst_addr)) {
+			if (rte_is_broadcast_ether_addr(&eh->dst_addr))
 				mac_type = BROADCAST_ADDRESS;
 			else
 				mac_type = MULTICAST_ADDRESS;
@@ -2243,17 +2243,17 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 		    (mac_type << ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE_SHIFT);
 
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_hi,
-			   &eh->d_addr.addr_bytes[0], 2);
+			   &eh->dst_addr.addr_bytes[0], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_mid,
-			   &eh->d_addr.addr_bytes[2], 2);
+			   &eh->dst_addr.addr_bytes[2], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_lo,
-			   &eh->d_addr.addr_bytes[4], 2);
+			   &eh->dst_addr.addr_bytes[4], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_hi,
-			   &eh->s_addr.addr_bytes[0], 2);
+			   &eh->src_addr.addr_bytes[0], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_mid,
-			   &eh->s_addr.addr_bytes[2], 2);
+			   &eh->src_addr.addr_bytes[2], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_lo,
-			   &eh->s_addr.addr_bytes[4], 2);
+			   &eh->src_addr.addr_bytes[4], 2);
 
 		tx_parse_bd->data.mac_addr.dst_hi =
 		    rte_cpu_to_be_16(tx_parse_bd->data.mac_addr.dst_hi);
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 8b5b32fcaf..3558644232 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -587,8 +587,8 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 	hdr = rte_pktmbuf_mtod(lacp_pkt, struct lacpdu_header *);
 
 	/* Source and destination MAC */
-	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.d_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.s_addr);
+	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
+	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -1346,7 +1346,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.s_addr);
+		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 1d36a4a4a2..86335a7971 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -213,8 +213,8 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
 
-	rte_ether_addr_copy(&client_info->app_mac, &eth_h->s_addr);
-	rte_ether_addr_copy(&client_info->cli_mac, &eth_h->d_addr);
+	rte_ether_addr_copy(&client_info->app_mac, &eth_h->src_addr);
+	rte_ether_addr_copy(&client_info->cli_mac, &eth_h->dst_addr);
 	if (client_info->vlan_count > 0)
 		eth_h->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..6831fcb104 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -342,11 +342,11 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 						 bufs[j])) ||
 				!collecting ||
 				(!promisc &&
-				 ((rte_is_unicast_ether_addr(&hdr->d_addr) &&
+				 ((rte_is_unicast_ether_addr(&hdr->dst_addr) &&
 				   !rte_is_same_ether_addr(bond_mac,
-						       &hdr->d_addr)) ||
+						       &hdr->dst_addr)) ||
 				  (!allmulti &&
-				   rte_is_multicast_ether_addr(&hdr->d_addr)))))) {
+				   rte_is_multicast_ether_addr(&hdr->dst_addr)))))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
@@ -477,9 +477,9 @@ update_client_stats(uint32_t addr, uint16_t port, uint32_t *TXorRXindicator)
 		"DstMAC:" RTE_ETHER_ADDR_PRT_FMT " DstIP:%s %s %d\n", \
 		info,							\
 		port,							\
-		RTE_ETHER_ADDR_BYTES(&eth_h->s_addr),                  \
+		RTE_ETHER_ADDR_BYTES(&eth_h->src_addr),                  \
 		src_ip,							\
-		RTE_ETHER_ADDR_BYTES(&eth_h->d_addr),                  \
+		RTE_ETHER_ADDR_BYTES(&eth_h->dst_addr),                  \
 		dst_ip,							\
 		arp_op, ++burstnumber)
 #endif
@@ -643,9 +643,9 @@ static inline uint16_t
 ether_hash(struct rte_ether_hdr *eth_hdr)
 {
 	unaligned_uint16_t *word_src_addr =
-		(unaligned_uint16_t *)eth_hdr->s_addr.addr_bytes;
+		(unaligned_uint16_t *)eth_hdr->src_addr.addr_bytes;
 	unaligned_uint16_t *word_dst_addr =
-		(unaligned_uint16_t *)eth_hdr->d_addr.addr_bytes;
+		(unaligned_uint16_t *)eth_hdr->dst_addr.addr_bytes;
 
 	return (word_src_addr[0] ^ word_dst_addr[0]) ^
 			(word_src_addr[1] ^ word_dst_addr[1]) ^
@@ -942,10 +942,10 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
-			if (rte_is_same_ether_addr(&ether_hdr->s_addr,
+			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
 							&primary_slave_addr))
 				rte_ether_addr_copy(&active_slave_addr,
-						&ether_hdr->s_addr);
+						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
 #endif
@@ -1017,7 +1017,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->s_addr);
+			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
 
 			/* Add packet to slave tx buffer */
 			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cdfdc904a6..33147169ba 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,14 +656,14 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.d_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.s_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.d_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.s_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	enic_spec.ether_type = spec->type;
 	enic_mask.ether_type = mask->type;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..2be7e71f89 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -333,8 +333,8 @@ mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
 		/* Build test packet L2 header (Ethernet). */
 		dst = (uint8_t *)&es->inline_data;
 		eth_hdr = (struct rte_ether_hdr *)dst;
-		rte_eth_random_addr(&eth_hdr->d_addr.addr_bytes[0]);
-		rte_eth_random_addr(&eth_hdr->s_addr.addr_bytes[0]);
+		rte_eth_random_addr(&eth_hdr->dst_addr.addr_bytes[0]);
+		rte_eth_random_addr(&eth_hdr->src_addr.addr_bytes[0]);
 		eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		/* Build test packet L3 header (IP v4). */
 		dst += sizeof(struct rte_ether_hdr);
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f..f8e8c1df35 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -422,8 +422,8 @@ static int lcore_main(__rte_unused void *arg1)
 					if (arp_hdr->arp_opcode == rte_cpu_to_be_16(RTE_ARP_OP_REQUEST)) {
 						arp_hdr->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 						/* Switch src and dst data and set bonding MAC */
-						rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-						rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+						rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+						rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
 						rte_ether_addr_copy(&arp_hdr->arp_data.arp_sha,
 								&arp_hdr->arp_data.arp_tha);
 						arp_hdr->arp_data.arp_tip = arp_hdr->arp_data.arp_sip;
@@ -443,8 +443,10 @@ static int lcore_main(__rte_unused void *arg1)
 				 }
 				ipv4_hdr = (struct rte_ipv4_hdr *)((char *)(eth_hdr + 1) + offset);
 				if (ipv4_hdr->dst_addr == bond_ip) {
-					rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-					rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+					rte_ether_addr_copy(&eth_hdr->src_addr,
+							&eth_hdr->dst_addr);
+					rte_ether_addr_copy(&bond_mac_addr,
+							&eth_hdr->src_addr);
 					ipv4_hdr->dst_addr = ipv4_hdr->src_addr;
 					ipv4_hdr->src_addr = bond_ip;
 					rte_eth_tx_burst(BOND_PORT, 0, &pkts[i], 1);
@@ -519,8 +521,8 @@ static void cmd_obj_send_parsed(void *parsed_result,
 	created_pkt->pkt_len = pkt_size;
 
 	eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
-	memset(&eth_hdr->d_addr, 0xFF, RTE_ETHER_ADDR_LEN);
+	rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
+	memset(&eth_hdr->dst_addr, 0xFF, RTE_ETHER_ADDR_LEN);
 	eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP);
 
 	arp_hdr = (struct rte_arp_hdr *)(
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6..1bc675962b 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -172,8 +172,8 @@ static void process_frame(struct app_port *ptr_port,
 	struct rte_ether_hdr *ptr_mac_hdr;
 
 	ptr_mac_hdr = rte_pktmbuf_mtod(ptr_frame, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&ptr_mac_hdr->s_addr, &ptr_mac_hdr->d_addr);
-	rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->s_addr);
+	rte_ether_addr_copy(&ptr_mac_hdr->src_addr, &ptr_mac_hdr->dst_addr);
+	rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->src_addr);
 }
 
 static int worker_main(__rte_unused void *ptr_data)
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 6a4287602e..b12eb281e1 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -104,8 +104,8 @@ exchange_mac(struct rte_mbuf *m)
 
 	/* change mac addresses on packet (to use mbuf data) */
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&eth->d_addr, &addr);
-	rte_ether_addr_copy(&addr, &eth->d_addr);
+	rte_ether_addr_copy(&eth->dst_addr, &addr);
+	rte_ether_addr_copy(&addr, &eth->dst_addr);
 }
 
 static __rte_always_inline void
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55..dd8a33d036 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -75,9 +75,9 @@ main_loop(void)
 					eth_hdr = rte_pktmbuf_mtod(m,
 							struct rte_ether_hdr *);
 					print_ether_addr("src=",
-							&eth_hdr->s_addr);
+							&eth_hdr->src_addr);
 					print_ether_addr(" - dst=",
-							&eth_hdr->d_addr);
+							&eth_hdr->dst_addr);
 					printf(" - queue=0x%x",
 							(unsigned int)i);
 					printf("\n");
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be5..ff36aa7f1e 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -322,11 +322,11 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid)
 	/* 02:00:00:00:00:xx - overwriting 2 bytes of source address but
 	 * it's acceptable cause it gets overwritten by rte_ether_addr_copy
 	 */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 /* Perform packet copy there is a user-defined function. 8< */
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f245369720..a7f40970f2 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -362,13 +362,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 		m->l2_len = sizeof(struct rte_ether_hdr);
 
 		/* 02:00:00:00:00:xx */
-		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+		d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 		*((uint64_t *)d_addr_bytes) = 0x000000000002 +
 			((uint64_t)port_out << 40);
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[port_out],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		eth_hdr->ether_type = ether_type;
 	}
 
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790b..d611c7d016 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -413,11 +413,11 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 	/* if packet wasn't IPv4 or IPv6, it's forwarded to the port it came from */
 
 	/* 02:00:00:00:00:xx */
-	d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+	d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 	*((uint64_t *)d_addr_bytes) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->src_addr);
 
 	send_single_packet(m, dst_port);
 }
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..7b01872c6f 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -545,9 +545,9 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 	}
 
-	memcpy(&ethhdr->s_addr, &ethaddr_tbl[port].src,
+	memcpy(&ethhdr->src_addr, &ethaddr_tbl[port].src,
 			sizeof(struct rte_ether_addr));
-	memcpy(&ethhdr->d_addr, &ethaddr_tbl[port].dst,
+	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[port].dst,
 			sizeof(struct rte_ether_addr));
 }
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..61cf9f57fb 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -49,8 +49,8 @@ update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid)
 	struct rte_ether_hdr *ethhdr;
 
 	ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
-	memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
-	memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
+	memcpy(&ethhdr->src_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
+	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
 }
 
 static inline void
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b..d10de30ddb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -283,8 +283,8 @@ mcast_send_pkt(struct rte_mbuf *pkt, struct rte_ether_addr *dest_addr,
 		rte_pktmbuf_prepend(pkt, (uint16_t)sizeof(*ethdr));
 	RTE_ASSERT(ethdr != NULL);
 
-	rte_ether_addr_copy(dest_addr, &ethdr->d_addr);
-	rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->s_addr);
+	rte_ether_addr_copy(dest_addr, &ethdr->dst_addr);
+	rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->src_addr);
 	ethdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
 
 	/* Put new packet into the output queue */
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..c2ffbdd506 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -617,11 +617,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint16_t dest_portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 static void
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 939221d45a..cecbd9b70e 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -92,11 +92,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(addr, &eth->s_addr);
+	rte_ether_addr_copy(addr, &eth->src_addr);
 }
 
 static __rte_always_inline struct l2fwd_resources *
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index afe7fe6ead..06280321b1 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -351,11 +351,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index d0d979f5ba..07271affb4 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -177,11 +177,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 05532551a5..f3deeba0a6 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -170,11 +170,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, unsigned dest_portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 /* Simple forward. 8< */
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564..60545f3059 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1375,7 +1375,8 @@ send_single_packet(struct rte_mbuf *m, uint16_t port)
 
 	/* update src and dst mac*/
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	memcpy(eh, &port_l2hdr[port], sizeof(eh->d_addr) + sizeof(eh->s_addr));
+	memcpy(eh, &port_l2hdr[port],
+			sizeof(eh->dst_addr) + sizeof(eh->src_addr));
 
 	qconf = &lcore_conf[lcore_id];
 	rte_eth_tx_buffer(port, qconf->tx_queue_id[port],
@@ -1743,8 +1744,9 @@ parse_eth_dest(const char *optarg)
 		return "port value exceeds RTE_MAX_ETHPORTS("
 			RTE_STR(RTE_MAX_ETHPORTS) ")";
 
-	if (cmdline_parse_etheraddr(NULL, port_end, &port_l2hdr[portid].d_addr,
-			sizeof(port_l2hdr[portid].d_addr)) < 0)
+	if (cmdline_parse_etheraddr(NULL, port_end,
+			&port_l2hdr[portid].dst_addr,
+			sizeof(port_l2hdr[portid].dst_addr)) < 0)
 		return "Invalid ethernet address";
 	return NULL;
 }
@@ -2002,8 +2004,9 @@ set_default_dest_mac(void)
 	uint32_t i;
 
 	for (i = 0; i != RTE_DIM(port_l2hdr); i++) {
-		port_l2hdr[i].d_addr.addr_bytes[0] = RTE_ETHER_LOCAL_ADMIN_ADDR;
-		port_l2hdr[i].d_addr.addr_bytes[5] = i;
+		port_l2hdr[i].dst_addr.addr_bytes[0] =
+				RTE_ETHER_LOCAL_ADMIN_ADDR;
+		port_l2hdr[i].dst_addr.addr_bytes[5] = i;
 	}
 }
 
@@ -2109,14 +2112,14 @@ main(int argc, char **argv)
 				"rte_eth_dev_adjust_nb_rx_tx_desc: err=%d, port=%d\n",
 				ret, portid);
 
-		ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].s_addr);
+		ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].src_addr);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
 				"rte_eth_macaddr_get: err=%d, port=%d\n",
 				ret, portid);
 
-		print_ethaddr("Dst MAC:", &port_l2hdr[portid].d_addr);
-		print_ethaddr(", Src MAC:", &port_l2hdr[portid].s_addr);
+		print_ethaddr("Dst MAC:", &port_l2hdr[portid].dst_addr);
+		print_ethaddr(", Src MAC:", &port_l2hdr[portid].src_addr);
 		printf(", ");
 
 		/* init memory */
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44a..90456f8f33 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -717,7 +717,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 			dst_port = portid;
 
 		/* 02:00:00:00:00:xx */
-		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+		d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 		*((uint64_t *)d_addr_bytes) =
 			0x000000000002 + ((uint64_t)dst_port << 40);
 
@@ -729,7 +729,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -755,7 +755,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 #else
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index b992a21da4..1eff591b48 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -40,7 +40,7 @@ l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[dst_port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 
 	return dst_port;
 }
@@ -68,7 +68,7 @@ l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[dst_port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 
 	return dst_port;
 }
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index f8d6a3ac39..c594877d96 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -94,7 +94,7 @@ fib_send_single(int nb_tx, struct lcore_conf *qconf,
 				struct rte_ether_hdr *);
 		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[hops[j]];
 		rte_ether_addr_copy(&ports_eth_addr[hops[j]],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		/* Send single packet. */
 		send_single_packet(qconf, pkts_burst[j], hops[j]);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 7200160164..227a6d7fa5 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -260,7 +260,7 @@ lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 #endif
 	return mbuf->port;
 }
diff --git a/examples/l3fwd/l3fwd_lpm.h b/examples/l3fwd/l3fwd_lpm.h
index d730d72a20..dd2eae18b8 100644
--- a/examples/l3fwd/l3fwd_lpm.h
+++ b/examples/l3fwd/l3fwd_lpm.h
@@ -44,7 +44,7 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(qconf, m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -66,7 +66,7 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(qconf, m, dst_port);
 	} else {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index a0bc1e56d0..e4542df11f 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -182,11 +182,11 @@ lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf26..b3024a40e6 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -1078,14 +1078,14 @@ simple_ipv4_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
 	*(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
 
 	send_single_packet(m[0], (uint8_t)dst_port[0]);
 	send_single_packet(m[1], (uint8_t)dst_port[1]);
@@ -1213,14 +1213,14 @@ simple_ipv6_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
 	*(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
 
 	send_single_packet(m[0], dst_port[0]);
 	send_single_packet(m[1], dst_port[1]);
@@ -1268,11 +1268,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
 		++(ipv4_hdr->hdr_checksum);
 #endif
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -1290,11 +1290,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
 			dst_port = portid;
 
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fb..61e4ee0ea1 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -426,10 +426,10 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 		created_pkt->data_len = pkt_size;
 		created_pkt->pkt_len = pkt_size;
 		eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
-		rte_ether_addr_copy(&eth_addr, &eth_hdr->s_addr);
+		rte_ether_addr_copy(&eth_addr, &eth_hdr->src_addr);
 
 		/* Set multicast address 01-1B-19-00-00-00. */
-		rte_ether_addr_copy(&eth_multicast, &eth_hdr->d_addr);
+		rte_ether_addr_copy(&eth_multicast, &eth_hdr->dst_addr);
 
 		eth_hdr->ether_type = htons(PTP_PROTOCOL);
 		ptp_msg = (struct ptp_message *)
@@ -449,14 +449,14 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 		client_clkid =
 			&ptp_msg->delay_req.hdr.source_port_id.clock_id;
 
-		client_clkid->id[0] = eth_hdr->s_addr.addr_bytes[0];
-		client_clkid->id[1] = eth_hdr->s_addr.addr_bytes[1];
-		client_clkid->id[2] = eth_hdr->s_addr.addr_bytes[2];
+		client_clkid->id[0] = eth_hdr->src_addr.addr_bytes[0];
+		client_clkid->id[1] = eth_hdr->src_addr.addr_bytes[1];
+		client_clkid->id[2] = eth_hdr->src_addr.addr_bytes[2];
 		client_clkid->id[3] = 0xFF;
 		client_clkid->id[4] = 0xFE;
-		client_clkid->id[5] = eth_hdr->s_addr.addr_bytes[3];
-		client_clkid->id[6] = eth_hdr->s_addr.addr_bytes[4];
-		client_clkid->id[7] = eth_hdr->s_addr.addr_bytes[5];
+		client_clkid->id[5] = eth_hdr->src_addr.addr_bytes[3];
+		client_clkid->id[6] = eth_hdr->src_addr.addr_bytes[4];
+		client_clkid->id[7] = eth_hdr->src_addr.addr_bytes[5];
 
 		rte_memcpy(&ptp_data->client_clock_id,
 			   client_clkid,
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e3..f74408a92f 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -757,7 +757,7 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
 	/* Learn MAC address of guest device from packet */
 	pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	if (find_vhost_dev(&pkt_hdr->s_addr)) {
+	if (find_vhost_dev(&pkt_hdr->src_addr)) {
 		RTE_LOG(ERR, VHOST_DATA,
 			"(%d) device is using a registered MAC!\n",
 			vdev->vid);
@@ -765,7 +765,7 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
 	}
 
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
-		vdev->mac_address.addr_bytes[i] = pkt_hdr->s_addr.addr_bytes[i];
+		vdev->mac_address.addr_bytes[i] = pkt_hdr->src_addr.addr_bytes[i];
 
 	/* vlan_tag currently uses the device_id. */
 	vdev->vlan_tag = vlan_tags[vdev->vid];
@@ -945,7 +945,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 	uint16_t lcore_id = rte_lcore_id();
 	pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+	dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
 	if (!dst_vdev)
 		return -1;
 
@@ -993,7 +993,7 @@ find_local_dest(struct vhost_dev *vdev, struct rte_mbuf *m,
 	struct rte_ether_hdr *pkt_hdr =
 		rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+	dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
 	if (!dst_vdev)
 		return 0;
 
@@ -1076,7 +1076,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 
 
 	nh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	if (unlikely(rte_is_broadcast_ether_addr(&nh->d_addr))) {
+	if (unlikely(rte_is_broadcast_ether_addr(&nh->dst_addr))) {
 		struct vhost_dev *vdev2;
 
 		TAILQ_FOREACH(vdev2, &vhost_dev_list, global_vdev_entry) {
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 755dcafa2f..ee7f4324e1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -461,11 +461,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
 }
 
 /* When we receive a HUP signal, print out our stats */
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 6d3c918d6d..14c20e6a8b 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -512,11 +512,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
 }
 
 /* When we receive a HUP signal, print out our stats */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..a89945061a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -785,8 +785,8 @@ struct rte_flow_item_eth {
 /** Default mask for RTE_FLOW_ITEM_TYPE_ETH. */
 #ifndef __cplusplus
 static const struct rte_flow_item_eth rte_flow_item_eth_mask = {
-	.hdr.d_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.hdr.s_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	.hdr.ether_type = RTE_BE16(0x0000),
 };
 #endif
diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index feb5855144..aff22178e3 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -243,8 +243,8 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	ip_id = is_atomic ? 0 : rte_be_to_cpu_16(ipv4_hdr->packet_id);
 	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
 	key.ip_src_addr = ipv4_hdr->src_addr;
 	key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.src_port = tcp_hdr->src_port;
diff --git a/lib/gro/gro_udp4.c b/lib/gro/gro_udp4.c
index b8301296df..e78dda7874 100644
--- a/lib/gro/gro_udp4.c
+++ b/lib/gro/gro_udp4.c
@@ -238,8 +238,8 @@ gro_udp4_reassemble(struct rte_mbuf *pkt,
 	is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
 	frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
 	key.ip_src_addr = ipv4_hdr->src_addr;
 	key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.ip_id = ip_id;
diff --git a/lib/gro/gro_vxlan_tcp4.c b/lib/gro/gro_vxlan_tcp4.c
index f3b6e603b9..2005899afe 100644
--- a/lib/gro/gro_vxlan_tcp4.c
+++ b/lib/gro/gro_vxlan_tcp4.c
@@ -358,8 +358,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
 
 	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
 	key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
 	key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.inner_key.recv_ack = tcp_hdr->recv_ack;
@@ -368,8 +368,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
 
 	key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
 	key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
-	rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
-	rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
 	key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
 	key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
 	key.outer_src_port = udp_hdr->src_port;
diff --git a/lib/gro/gro_vxlan_udp4.c b/lib/gro/gro_vxlan_udp4.c
index 37476361d5..4767c910bb 100644
--- a/lib/gro/gro_vxlan_udp4.c
+++ b/lib/gro/gro_vxlan_udp4.c
@@ -338,16 +338,16 @@ gro_vxlan_udp4_reassemble(struct rte_mbuf *pkt,
 	is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
 	frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
 	key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
 	key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.inner_key.ip_id = ip_id;
 
 	key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
 	key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
-	rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
-	rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
 	key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
 	key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
 	/* Note: It is unnecessary to save outer_src_port here because it can
diff --git a/lib/net/rte_arp.c b/lib/net/rte_arp.c
index 5c1e27b8c0..9f7eb6b375 100644
--- a/lib/net/rte_arp.c
+++ b/lib/net/rte_arp.c
@@ -29,8 +29,8 @@ rte_net_make_rarp_packet(struct rte_mempool *mpool,
 	}
 
 	/* Ethernet header. */
-	memset(eth_hdr->d_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
-	rte_ether_addr_copy(mac, &eth_hdr->s_addr);
+	memset(eth_hdr->dst_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
+	rte_ether_addr_copy(mac, &eth_hdr->src_addr);
 	eth_hdr->ether_type = RTE_BE16(RTE_ETHER_TYPE_RARP);
 
 	/* RARP header. */
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index d0eeb6f996..f722a97d06 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -267,34 +267,16 @@ __rte_experimental
 int
 rte_ether_unformat_addr(const char *str, struct rte_ether_addr *eth_addr);
 
-/* Windows Sockets headers contain `#define s_addr S_un.S_addr`.
- * Temporarily disable this macro to avoid conflict at definition.
- * Place source MAC address in both `s_addr` and `S_un.S_addr` fields,
- * so that access works either directly or through the macro.
- */
-#pragma push_macro("s_addr")
-#ifdef s_addr
-#undef s_addr
-#endif
-
 /**
  * Ethernet header: Contains the destination address, source address
  * and frame type.
  */
 struct rte_ether_hdr {
-	struct rte_ether_addr d_addr; /**< Destination address. */
-	RTE_STD_C11
-	union {
-		struct rte_ether_addr s_addr; /**< Source address. */
-		struct {
-			struct rte_ether_addr S_addr;
-		} S_un; /**< Do not use directly; use s_addr instead.*/
-	};
+	struct rte_ether_addr dst_addr; /**< Destination address. */
+	struct rte_ether_addr src_addr; /**< Source address. */
 	rte_be16_t ether_type; /**< Frame type. */
 } __rte_aligned(2);
 
-#pragma pop_macro("s_addr")
-
 /**
  * Ethernet VLAN Header.
  * Contains the 16-bit VLAN Tag Control Identifier and the Ethernet type
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..4b0316bfed 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -615,8 +615,8 @@ encap_ether_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->ether.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->ether.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(ethertype);
 
 	return 0;
@@ -633,8 +633,8 @@ encap_vlan_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 	/* VLAN */
@@ -657,8 +657,8 @@ encap_qinq_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_QINQ);
 
 	/* SVLAN */
@@ -683,8 +683,8 @@ encap_qinq_pppoe_apply(void *data,
 	struct encap_qinq_pppoe_data *d = data;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 	/* SVLAN */
@@ -719,8 +719,8 @@ encap_mpls_apply(void *data,
 	uint32_t i;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(ethertype);
 
 	/* MPLS */
@@ -746,8 +746,8 @@ encap_pppoe_apply(void *data,
 	struct encap_pppoe_data *d = data;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_PPPOE_SESSION);
 
 	/* PPPoE and PPP*/
@@ -777,9 +777,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 			/* VLAN */
@@ -818,9 +818,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV4);
 
 			/* IPv4*/
@@ -855,9 +855,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 			/* VLAN */
@@ -896,9 +896,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV6);
 
 			/* IPv6*/
-- 
2.29.3


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 14:02  2%     ` [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-01 16:46  3%       ` Ferruh Yigit
  2021-10-01 17:40  0%         ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-01 16:46 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> Rework 'fast' burst functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

<...>

>  static inline int
>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;
> +
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}

Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?

<...>

> +++ b/lib/ethdev/version.map
> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>  	rte_mtr_meter_policy_delete;
>  	rte_mtr_meter_policy_update;
>  	rte_mtr_meter_policy_validate;
> +
> +	# added in 21.05

s/21.05/21.11/

> +	__rte_eth_rx_epilog;
> +	__rte_eth_tx_prolog;

These are directly called by the application and must be part of the ABI, but
they are marked as 'internal' and have an '__rte' prefix to highlight that,
which may be confusing.
What about making them a proper, non-internal API?

>  };
>  
>  INTERNAL {
>  	global:
>  
> +	rte_eth_fp_ops;

This variable is accessed in an inline function, so it is accessed by the
application; not sure it suits the 'internal' object definition, since
internal should be only for objects accessed by other parts of DPDK.
I think this can be added to 'DPDK_22'.

>  	rte_eth_dev_allocate;
>  	rte_eth_dev_allocated;
>  	rte_eth_dev_attach_secondary;
> 


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2 10/10] doc: update release note
  @ 2021-10-01 16:59  4%   ` Fan Zhang
  0 siblings, 0 replies; 200+ results
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patch updates the release notes to describe the QAT refactor
changes made.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3ade7fe5ac..02a61be76b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -157,6 +157,10 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* common/qat: QAT PMD is refactored to divide generation-specific control
+  path code into dedicated files. This change also applies to QAT compression,
+  QAT symmetric crypto, and QAT asymmetric crypto.
+
 
 ABI Changes
 -----------
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
                       ` (3 preceding siblings ...)
  2021-10-01 14:02  9%     ` [dpdk-dev] [PATCH v3 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-01 17:02  0%     ` Ferruh Yigit
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  5 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-01 17:02 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, mdr, jay.jayatheerthan

On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
>    into this series.
>    I had to do a similar thing here, so I decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from function pointers itself, each element of this
>    flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with new implementation RX/TX callback invocation
>    will cost one extra function call with these changes. That might cause
>    some slowdown for code-path with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
>  

Overall +1 to the approach.

Only 'metrics' library is failing to build and it also needs to include driver
header:

diff --git a/lib/metrics/rte_metrics_telemetry.c
b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef6135c..5be21b2e86c5 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */

-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>

> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank Ferruh and Jerin for reviewing and testing previous
> versions of these series. All other interested parties please don't be shy
> and provide your feedback.
> 
> Konstantin Ananyev (7):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy ethdev 'fast' API into separate structure
>   ethdev: make burst functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: remove legacy Rx descriptor done API
>   ethdev: hide eth dev related structures
> 

<...>


^ permalink raw reply	[relevance 0%]
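
The flat-array dispatch described in points 1-2 of the cover letter above can be visualized with a reduced model. Everything below is an illustrative stand-in rather than the real DPDK definitions: names such as `fp_ops_sketch` and `rx_burst_sketch` are invented, and only `rx_pkt_burst` and the per-queue data array echo the patch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS  4
#define MAX_QUEUES 4

typedef uint16_t (*rx_burst_fn)(void *rxq, void **pkts, uint16_t n);

/* One public slot per port: the burst pointer plus an opaque array of
 * per-queue data pointers, as in points 1-2 of the cover letter. */
struct fp_ops_sketch {
	rx_burst_fn rx_pkt_burst;
	void **rxq_data;
};

static struct fp_ops_sketch fp_ops[MAX_PORTS];

/* Stand-in for the dummy burst installed while a port is stopped. */
static uint16_t
dummy_rx(void *rxq, void **pkts, uint16_t n)
{
	(void)rxq; (void)pkts; (void)n;
	return 0; /* an unconfigured port never returns packets */
}

/* The inline wrapper now touches only the flat array, so rte_eth_dev
 * itself can become internal without breaking the inline fast path. */
static inline uint16_t
rx_burst_sketch(uint16_t port, uint16_t queue, void **pkts, uint16_t n)
{
	const struct fp_ops_sketch *p = &fp_ops[port];
	return p->rx_pkt_burst(p->rxq_data[queue], pkts, n);
}
```

With the real structures, eth_dev_fp_ops_setup() fills such a slot at rte_eth_dev_start() time and eth_dev_fp_ops_reset() points it back at the dummies on stop, as the patches later in the thread show.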

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 16:46  3%       ` Ferruh Yigit
@ 2021-10-01 17:40  0%         ` Ananyev, Konstantin
  2021-10-04  8:46  3%           ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-01 17:40 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay



> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 
> <...>
> 
> >  static inline int
> >  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >  {
> > -	struct rte_eth_dev *dev;
> > +	struct rte_eth_fp_ops *p;
> > +	void *qd;
> > +
> > +	if (port_id >= RTE_MAX_ETHPORTS ||
> > +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid port_id=%u or queue_id=%u\n",
> > +			port_id, queue_id);
> > +		return -EINVAL;
> > +	}
> 
> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?

The original rte_eth_rx_queue_count() always had similar checks enabled;
that's why I also kept them 'always on'.
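
For context, the `RTE_ETHDEV_DEBUG_RX` guard mentioned above makes such validation a debug-build-only cost. In the sketch below, only the flag name is taken from DPDK; the function, limits, and return value are reduced stand-ins, and the flag is defined locally so the check compiles in:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define MAX_PORTS  4
#define MAX_QUEUES 4

/* Defined here so the check below is compiled in; in DPDK this flag
 * would come from the build system for debug builds only. */
#define RTE_ETHDEV_DEBUG_RX 1

static inline int
rx_queue_count_sketch(uint16_t port_id, uint16_t queue_id)
{
#ifdef RTE_ETHDEV_DEBUG_RX
	/* With the guard, invalid IDs cost nothing in release builds. */
	if (port_id >= MAX_PORTS || queue_id >= MAX_QUEUES)
		return -EINVAL;
#endif
	return 0; /* stand-in for the per-queue descriptor count */
}
```

In the patch as posted, these particular checks stay unconditional, matching the pre-existing rte_eth_rx_queue_count() behavior.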

> 
> <...>
> 
> > +++ b/lib/ethdev/version.map
> > @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >  	rte_mtr_meter_policy_delete;
> >  	rte_mtr_meter_policy_update;
> >  	rte_mtr_meter_policy_validate;
> > +
> > +	# added in 21.05
> 
> s/21.05/21.11/
> 
> > +	__rte_eth_rx_epilog;
> > +	__rte_eth_tx_prolog;
> 
> These are directly called by the application and must be part of the ABI, but
> they are marked as 'internal' and have an '__rte' prefix to highlight that,
> which may be confusing.
> What about making them a proper, non-internal API?

Hmm, not sure what you suggest here.
We don't want users to call them explicitly.
They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
So I did what I thought is our usual policy for such semi-internal things:
have '@internal' in the comments, but in version.map put them under the
EXPERIMENTAL/global section.

What do you think it should be instead?
 
> >  };
> >
> >  INTERNAL {
> >  	global:
> >
> > +	rte_eth_fp_ops;
> 
> This variable is accessed in an inline function, so it is accessed by the
> application; not sure it suits the 'internal' object definition, since
> internal should be only for objects accessed by other parts of DPDK.
> I think this can be added to 'DPDK_22'.
> 
> >  	rte_eth_dev_allocate;
> >  	rte_eth_dev_allocated;
> >  	rte_eth_dev_attach_secondary;
> >


^ permalink raw reply	[relevance 0%]
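
For readers less familiar with the symbol-export files debated above, the two stanzas of an ethdev-style version.map differ roughly as follows. The grouping and comments are illustrative; the symbol names are taken from the thread:

```
EXPERIMENTAL {
	global:

	# Called from inline wrappers compiled into application
	# binaries, so they must be exported even though users are
	# not expected to call them directly.
	__rte_eth_rx_epilog;
	__rte_eth_tx_prolog;
};

INTERNAL {
	global:

	# Intended only for other DPDK libraries and drivers.
	rte_eth_dev_allocate;
};
```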

* [dpdk-dev] [PATCH 1/3] ethdev: update modify field flow action
  @ 2021-10-01 19:52  9%   ` Viacheslav Ovsiienko
  2021-10-04  9:38  0%     ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-01 19:52 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas

The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:

  - immediate source can be presented either as an unsigned
    64-bit integer or pointer to data pattern in memory.
    There was no explicit pointer field defined in the union

  - the byte ordering for 64-bit integer was not specified.
    Many fields have lesser lengths and byte ordering
    is crucial.

  - how the bit offset is applied to the immediate source
    field was not defined and documented

  - 64-bit integer size is not enough to provide MAC and
    IPv6 addresses

In order to cover the issues and exclude any ambiguities
the following is done:

  - introduce the explicit pointer field
    in rte_flow_action_modify_data structure

  - replace the 64-bit unsigned integer with 16-byte array

  - update the modify field flow action documentation

[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  8 ++++++++
 doc/guides/rel_notes/release_21_11.rst |  7 +++++++
 lib/ethdev/rte_flow.h                  | 15 ++++++++++++---
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..a54760a7b4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,14 @@ a packet to any other part of it.
 ``value`` sets an immediate value to be used as a source or points to a
 location of the value in memory. It is used instead of ``level`` and ``offset``
 for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
+in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
+rte_flow_item_ipv6 conventions, and so on. The bitfield extracted from the
+memory and applied as the second operation parameter is defined by the width
+and the destination field offset. If the field size is larger than 16 bytes,
+the pattern can be provided as a pointer only.
 
 .. _table_rte_flow_action_modify_field:
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..7db6cccab0 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -170,6 +170,10 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+  array is extended, data pointer field is explicitly added to the union, the
+  action behavior is defined in a stricter fashion, and documentation updated.
+
 
 ABI Changes
 -----------
@@ -206,6 +210,9 @@ ABI Changes
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
 
+* ethdev: ``rte_flow_action_modify_data`` structure updated.
+
+
 Known Issues
 ------------
 
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..af4c693ead 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
 };
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
  * Field description for MODIFY_FIELD action.
  */
 struct rte_flow_action_modify_data {
@@ -3217,10 +3220,16 @@ struct rte_flow_action_modify_data {
 			uint32_t offset;
 		};
 		/**
-		 * Immediate value for RTE_FLOW_FIELD_VALUE or
-		 * memory address for RTE_FLOW_FIELD_POINTER.
+		 * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+		 * same byte order and length as in relevant rte_flow_item_xxx.
 		 */
-		uint64_t value;
+		uint8_t value[16];
+		/**
+		 * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+		 * should be the same as for relevant field in the
+		 * rte_flow_item_xxx structure.
+		 */
+		void *pvalue;
 	};
 };
 
-- 
2.18.1


^ permalink raw reply	[relevance 9%]
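
A short illustration of the byte-ordering rule the patch above documents: the 16-byte immediate array is filled exactly as the corresponding flow-item field would be, so a destination-MAC rewrite copies the six MAC bytes verbatim in wire order. The struct below is a reduced, hypothetical stand-in for rte_flow_action_modify_data, not the real definition:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Reduced stand-in for the updated rte_flow_action_modify_data union:
 * a 16-byte immediate plus the explicit pointer alternative. */
struct modify_data_sketch {
	union {
		uint8_t value[16]; /* immediate, flow-item byte order */
		void *pvalue;      /* RTE_FLOW_FIELD_POINTER source   */
	};
};

/* Fill the immediate for a MAC-destination rewrite: the bytes are laid
 * out exactly as in the dst field of rte_flow_item_eth (wire order). */
static void
set_mac_immediate(struct modify_data_sketch *d, const uint8_t mac[6])
{
	memset(d->value, 0, sizeof(d->value));
	memcpy(d->value, mac, 6);
}
```

For fields wider than 16 bytes, the patch requires the `pvalue` pointer form instead, with the pointed-to memory in the same layout.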

* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
@ 2021-10-02  0:14  0% Pavan Nikhilesh Bhagavatula
  2021-10-03 20:58  0% ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-10-02  0:14 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
	john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
	rahul.lakkireddy, hemant.agrawal, sachin.saxena, haiyue.wang,
	johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
	jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

>Copy public function pointers (rx_pkt_burst(), etc.) and related
>pointers to internal data from rte_eth_dev structure into a
>separate flat array. That array will remain in a public header.
>The intention here is to make rte_eth_dev and related structures
>internal.
>That should allow future possible changes to core eth_dev structures
>to be transparent to the user and help to avoid ABI/API breakages.
>The plan is to keep minimal part of data from rte_eth_dev public,
>so we still can use inline functions for 'fast' calls
>(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>
>Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>---
> lib/ethdev/ethdev_private.c  | 52
>++++++++++++++++++++++++++++++++++++
> lib/ethdev/ethdev_private.h  |  7 +++++
> lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> lib/ethdev/rte_ethdev_core.h | 45
>+++++++++++++++++++++++++++++++
> 4 files changed, 121 insertions(+)
>

<snip>

>+void
>+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>+{
>+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>+	static const struct rte_eth_fp_ops dummy_ops = {
>+		.rx_pkt_burst = dummy_eth_rx_burst,
>+		.tx_pkt_burst = dummy_eth_tx_burst,
>+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
>+		.txq = {.data = dummy_data, .clbk = dummy_data,},
>+	};
>+
>+	*fpo = dummy_ops;
>+}
>+
>+void
>+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>+		const struct rte_eth_dev *dev)
>+{
>+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
>+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
>+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>+	fpo->rx_queue_count = dev->rx_queue_count;
>+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
>+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
>+
>+	fpo->rxq.data = dev->data->rx_queues;
>+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>+
>+	fpo->txq.data = dev->data->tx_queues;
>+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>+}
>diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>index 3724429577..40333e7651 100644
>--- a/lib/ethdev/ethdev_private.h
>+++ b/lib/ethdev/ethdev_private.h
>@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
>*_start, rte_eth_cmp_t cmp,
> /* Parse devargs value for representor parameter. */
> int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>
>+/* reset eth 'fast' API to dummy values */
>+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>+
>+/* setup eth 'fast' API to ethdev values */
>+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>+		const struct rte_eth_dev *dev);
>+
> #endif /* _ETH_PRIVATE_H_ */
>diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>index 424bc260fa..9fbb1bc3db 100644
>--- a/lib/ethdev/rte_ethdev.c
>+++ b/lib/ethdev/rte_ethdev.c
>@@ -44,6 +44,9 @@
> static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>
>+/* public 'fast' API */
>+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>+
> /* spinlock for eth device callbacks */
> static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>
>@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> 		(*dev->dev_ops->link_update)(dev, 0);
> 	}
>
>+	/* expose selection of PMD rx/tx function */
>+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>+

A secondary process will not set these properly, I believe, as it might not
call start(); in that case the ops set in the primary process will not be
available to it.

One simple solution is to call ops_setup() around rte_eth_dev_attach_secondary(),
but if the application doesn't invoke start() on the primary first, the ops
will not be set for it.


> 	rte_ethdev_trace_start(port_id);
> 	return 0;
> }
>@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> 		return 0;
> 	}
>
>+	/* point rx/tx functions to dummy ones */
>+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>+
> 	dev->data->dev_started = 0;
> 	ret = (*dev->dev_ops->dev_stop)(dev);
> 	rte_ethdev_trace_stop(port_id, ret);
>2.26.3


^ permalink raw reply	[relevance 0%]
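
The concern raised in the message above comes down to the flat ops array living in each process's own memory: a secondary process that never runs the setup path keeps the dummy ops even after the primary has started a port. The self-contained model below sketches the attach-time fix being suggested; all names are illustrative stand-ins, not DPDK API:

```c
#include <assert.h>

#define MAX_PORTS 4

typedef int (*burst_fn)(void);

static int dummy_burst(void) { return -1; } /* "not configured" */
static int pmd_burst(void)   { return 0; }  /* real fast path   */

/* State the primary shares with secondaries (shared memory in DPDK). */
struct dev_data_sketch {
	int started;
	burst_fn burst;
};
static struct dev_data_sketch devs[MAX_PORTS];

/* Per-process dispatch table: each process must fill its own copy. */
static burst_fn fp_ops[MAX_PORTS];

/* What an attach-time hook could do in a secondary process: adopt the
 * fast path only for ports the primary has already started. */
static void
attach_fp_ops_sketch(void)
{
	for (int i = 0; i < MAX_PORTS; i++)
		fp_ops[i] = devs[i].started ? devs[i].burst : dummy_burst;
}
```

As the comment notes, this still misses ports that the primary starts only after the secondary has attached.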

* Re: [dpdk-dev] [PATCH] net: promote make rarp packet API as stable
  @ 2021-10-02  8:57  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-02  8:57 UTC (permalink / raw)
  To: Xiao Wang; +Cc: dev, Olivier Matz, Stephen Hemminger, Xia, Chenbo

On Thu, Sep 16, 2021 at 1:38 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> On Wed, Sep 08, 2021 at 06:59:15PM +0800, Xiao Wang wrote:
> > rte_net_make_rarp_packet was introduced in version v18.02, there was no
> > change in this public API since then, and it's still being used by vhost
> > lib and virtio driver, so promote it as stable ABI.
> >
> > Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
>

Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>


Applied, thanks.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 04/13] eventdev: move inline APIs into separate structure
  @ 2021-10-03  8:27  2%   ` pbhagavatula
    1 sibling, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-03  8:27 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intension is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.`

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 lib/eventdev/eventdev_pmd.h      |  25 +++++++
 lib/eventdev/eventdev_pmd_pci.h  |   5 +-
 lib/eventdev/eventdev_private.c  | 112 +++++++++++++++++++++++++++++++
 lib/eventdev/meson.build         |   1 +
 lib/eventdev/rte_eventdev.c      |  12 +++-
 lib/eventdev/rte_eventdev_core.h |  28 ++++++++
 lib/eventdev/version.map         |   5 ++
 7 files changed, 186 insertions(+), 2 deletions(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7eb2aa0520..2f88dbd6d8 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1189,4 +1189,29 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+		     const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..563b579a77 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,11 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
 
 	/* Invoke PMD device initialization function */
 	retval = devinit(eventdev);
-	if (retval == 0)
+	if (retval == 0) {
+		event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+				     eventdev);
 		return 0;
+	}
 
 	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
 			" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_event_fp_ops dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.data = dummy_data,
+	};
+
+	*fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+		     const struct rte_eventdev *dev)
+{
+	fp_op->enqueue = dev->enqueue;
+	fp_op->enqueue_burst = dev->enqueue_burst;
+	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->dequeue = dev->dequeue;
+	fp_op->dequeue_burst = dev->dequeue_burst;
+	fp_op->txa_enqueue = dev->txa_enqueue;
+	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
 endif
 
 sources = files(
+        'eventdev_private.c',
         'rte_eventdev.c',
         'rte_event_ring.c',
         'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..f14a887340 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -300,8 +303,8 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 		event_dev_queue_config(dev, 0);
 		event_dev_port_config(dev, 0);
 	}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
 	else
 		return diag;
 
+	event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
 	return 0;
 }
 
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_eventdev_trace_stop(dev_id);
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
 int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..4461073101 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,34 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_event_fp_ops {
+	event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[2];
+
+	void **data;
+	/**< points to array of internal port data pointers */
+	uintptr_t reserved2[4];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..33ab447d4b 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	#added in 21.11
+	rte_event_fp_ops;
+
 	local: *;
 };
 
@@ -141,6 +144,8 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	event_dev_fp_ops_reset;
+	event_dev_fp_ops_set;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note
  2021-09-27 15:22  3%   ` [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note Shijith Thotton
@ 2021-10-03  9:48  0%     ` Gujjar, Abhinandan S
  0 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-10-03  9:48 UTC (permalink / raw)
  To: Shijith Thotton, dev; +Cc: adwivedi, anoobj, gakhil, jerinj, pbhagavatula

Acked-by: Abhinandan Gujjar <Abhinandan.gujjar@intel.com>

> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Monday, September 27, 2021 8:53 PM
> To: dev@dpdk.org
> Cc: Shijith Thotton <sthotton@marvell.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; adwivedi@marvell.com;
> anoobj@marvell.com; gakhil@marvell.com; jerinj@marvell.com;
> pbhagavatula@marvell.com
> Subject: [PATCH v4] doc: remove event crypto metadata deprecation note
> 
> Proposed change to event crypto metadata is not done as per deprecation
> note. Instead, comments are updated in spec to improve readability.
> 
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> v4:
> * Removed changes as per deprecation note.
> * Updated spec comments.
> 
> v3:
> * Updated ABI section of release notes.
> 
> v2:
> * Updated deprecation notice.
> 
> v1:
> * Rebased.
> 
>  doc/guides/rel_notes/deprecation.rst    | 6 ------
>  lib/eventdev/rte_event_crypto_adapter.h | 1 +
>  2 files changed, 1 insertion(+), 6 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index bf1e07c0a8..fae3abd282 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -254,12 +254,6 @@ Deprecation Notices
>    An 8-byte reserved field will be added to the structure ``rte_event_timer``
> to
>    support future extensions.
> 
> -* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space
> holder
> -  for ``response_info``. Both should be decoupled for better clarity.
> -  New space for ``response_info`` can be made by changing
> -  ``rte_event_crypto_metadata`` type to structure from union.
> -  This change is targeted for DPDK 21.11.
> -
>  * metrics: The function ``rte_metrics_init`` will have a non-void return
>    in order to notify errors instead of calling ``rte_exit``.
> 
> diff --git a/lib/eventdev/rte_event_crypto_adapter.h
> b/lib/eventdev/rte_event_crypto_adapter.h
> index 27fb628eef..edbd5c61a3 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.h
> +++ b/lib/eventdev/rte_event_crypto_adapter.h
> @@ -227,6 +227,7 @@ union rte_event_crypto_metadata {
>  	struct rte_event_crypto_request request_info;
>  	/**< Request information to be filled in by application
>  	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
> +	 * First 8 bytes of request_info are reserved for response_info.
>  	 */
>  	struct rte_event response_info;
>  	/**< Response information to be filled in by application
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
  @ 2021-10-03 18:05  3%     ` Dmitry Kozlyuk
  2021-10-04 10:37  0%       ` [dpdk-dev] [EXT] " Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-10-03 18:05 UTC (permalink / raw)
  To: Harman Kalra; +Cc: dev, Ray Kinsella

2021-09-03 18:10 (UTC+0530), Harman Kalra:
> [...]
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> new file mode 100644
> index 0000000000..2e4fed96f0
> --- /dev/null
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include <stdlib.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +
> +#include <rte_interrupts.h>
> +
> +
> +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> +						       bool from_hugepage)

Since the purpose of the series is to reduce future ABI breakages,
how about making the second parameter a "flags" bitmask, to leave some spare bits?
(If not removing it completely per David's suggestion.)

> +{
> +	struct rte_intr_handle *intr_handle;
> +	int i;
> +
> +	if (from_hugepage)
> +		intr_handle = rte_zmalloc(NULL,
> +					  size * sizeof(struct rte_intr_handle),
> +					  0);
> +	else
> +		intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> +	if (!intr_handle) {
> +		RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> +		rte_errno = ENOMEM;
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < size; i++) {
> +		intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> +		intr_handle[i].alloc_from_hugepage = from_hugepage;
> +	}
> +
> +	return intr_handle;
> +}
> +
> +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> +				struct rte_intr_handle *intr_handle, int index)

If rte_intr_handle_instance_alloc() returns a pointer to an array,
this function is useless since the user can simply manipulate a pointer.
If we want to make a distinction between a single struct rte_intr_handle and a
commonly allocated bunch of such (but why?), then they should be represented
by distinct types.

> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOMEM;

Why is it sometimes ENOMEM and sometimes ENOTSUP when the handle is not
allocated?

> +		return NULL;
> +	}
> +
> +	return &intr_handle[index];
> +}
> +
> +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> +				       const struct rte_intr_handle *src,
> +				       int index)

See above regarding the "index" parameter. If it can be removed, a better name
for this function would be rte_intr_handle_copy().

> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (src == NULL) {
> +		RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> +		rte_errno = EINVAL;
> +		goto fail;
> +	}
> +
> +	if (index < 0) {
> +		RTE_LOG(ERR, EAL, "Index cannot be negative");
> +		rte_errno = EINVAL;
> +		goto fail;
> +	}

How about making this parameter "size_t"?

> +
> +	intr_handle[index].fd = src->fd;
> +	intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> +	intr_handle[index].type = src->type;
> +	intr_handle[index].max_intr = src->max_intr;
> +	intr_handle[index].nb_efd = src->nb_efd;
> +	intr_handle[index].efd_counter_size = src->efd_counter_size;
> +
> +	memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> +	memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> +
> +	return 0;
> +fail:
> +	return rte_errno;

Should be (-rte_errno) per documentation.
Please check all functions in this file that return an "int" status.

> [...]
> +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->vfio_dev_fd;
> +fail:
> +	return rte_errno;
> +}

Returning an errno value instead of an FD is very error-prone.
Probably returning (-1) is both safe and convenient?

> +
> +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> +				 int max_intr)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (max_intr > intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",

Seems like this common/cnxk name leaked here by mistake?

> +			max_intr, intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->max_intr = max_intr;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->max_intr;
> +fail:
> +	return rte_errno;
> +}

Should be negative per documentation and to avoid returning a positive value
that cannot be distinguished from a successful return.
Please also check other functions in this file returning an "int" result
(not status).

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-02  0:14  0% [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Pavan Nikhilesh Bhagavatula
@ 2021-10-03 20:58  0% ` Ananyev, Konstantin
  2021-10-03 21:10  0%   ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 20:58 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, dev
  Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
	john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
	rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
	jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
	Ferruh, mdr, Jayatheerthan, Jay


> >Copy public function pointers (rx_pkt_burst(), etc.) and related
> >pointers to internal data from rte_eth_dev structure into a
> >separate flat array. That array will remain in a public header.
> >The intention here is to make rte_eth_dev and related structures
> >internal.
> >That should allow future possible changes to core eth_dev structures
> >to be transparent to the user and help to avoid ABI/API breakages.
> >The plan is to keep minimal part of data from rte_eth_dev public,
> >so we still can use inline functions for 'fast' calls
> >(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >
> >Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >---
> > lib/ethdev/ethdev_private.c  | 52
> >++++++++++++++++++++++++++++++++++++
> > lib/ethdev/ethdev_private.h  |  7 +++++
> > lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
> > lib/ethdev/rte_ethdev_core.h | 45
> >+++++++++++++++++++++++++++++++
> > 4 files changed, 121 insertions(+)
> >
> 
> <snip>
> 
> >+void
> >+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> >+{
> >+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> >+	static const struct rte_eth_fp_ops dummy_ops = {
> >+		.rx_pkt_burst = dummy_eth_rx_burst,
> >+		.tx_pkt_burst = dummy_eth_tx_burst,
> >+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> >+		.txq = {.data = dummy_data, .clbk = dummy_data,},
> >+	};
> >+
> >+	*fpo = dummy_ops;
> >+}
> >+
> >+void
> >+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >+		const struct rte_eth_dev *dev)
> >+{
> >+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> >+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> >+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> >+	fpo->rx_queue_count = dev->rx_queue_count;
> >+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> >+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> >+
> >+	fpo->rxq.data = dev->data->rx_queues;
> >+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> >+
> >+	fpo->txq.data = dev->data->tx_queues;
> >+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> >+}
> >diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> >index 3724429577..40333e7651 100644
> >--- a/lib/ethdev/ethdev_private.h
> >+++ b/lib/ethdev/ethdev_private.h
> >@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
> >*_start, rte_eth_cmp_t cmp,
> > /* Parse devargs value for representor parameter. */
> > int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> >+/* reset eth 'fast' API to dummy values */
> >+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> >+
> >+/* setup eth 'fast' API to ethdev values */
> >+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >+		const struct rte_eth_dev *dev);
> >+
> > #endif /* _ETH_PRIVATE_H_ */
> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >index 424bc260fa..9fbb1bc3db 100644
> >--- a/lib/ethdev/rte_ethdev.c
> >+++ b/lib/ethdev/rte_ethdev.c
> >@@ -44,6 +44,9 @@
> > static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> > struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> >+/* public 'fast' API */
> >+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> >+
> > /* spinlock for eth device callbacks */
> > static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> >@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
> > 		(*dev->dev_ops->link_update)(dev, 0);
> > 	}
> >
> >+	/* expose selection of PMD rx/tx function */
> >+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> >+
> 
> I believe the secondary process will not set these properly, as it might not
> call start(); even if the primary does, the ops will not be set for the secondary.

That's a very good point; I have to admit I missed that part.

> 
> One simple solution is to call ops_setup() around rte_eth_dev_attach_secondary(),
> but if the application doesn't invoke start() on the primary, the ops will not be set for it.

I think rte_eth_dev_attach_secondary() wouldn't work, as the majority of PMDs set up
the fast ops function pointers after it.
From reading the code, rte_eth_dev_probing_finish() seems like a good choice,
as it is always the final point in device initialization for the secondary process.

BTW, we also need something similar at de-init phase.
rte_eth_dev_release_port() seems like a good candidate for it.

 
> 
> > 	rte_ethdev_trace_start(port_id);
> > 	return 0;
> > }
> >@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
> > 		return 0;
> > 	}
> >
> >+	/* point rx/tx functions to dummy ones */
> >+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> >+
> > 	dev->data->dev_started = 0;
> > 	ret = (*dev->dev_ops->dev_stop)(dev);
> > 	rte_ethdev_trace_stop(port_id, ret);
> >2.26.3


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array
  @ 2021-10-03 21:05  3% ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 21:05 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, dev
  Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
	john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
	rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
	jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
	Ferruh, mdr, Jayatheerthan, Jay



> 
> >At queue configure stage always allocate space for maximum possible
> >number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> >That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> >pointer to internal queue data without extra checking of current
> >number
> >of configured queues.
> >That would help in future to hide rte_eth_dev and related structures.
> >
> >Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> >---
> > lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
> > 1 file changed, 9 insertions(+), 27 deletions(-)
> >
> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >index daf5ca9242..424bc260fa 100644
> >--- a/lib/ethdev/rte_ethdev.c
> >+++ b/lib/ethdev/rte_ethdev.c
> >@@ -898,7 +898,8 @@ eth_dev_rx_queue_config(struct rte_eth_dev
> >*dev, uint16_t nb_queues)
> >
> > 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /*
> >first time configuration */
> > 		dev->data->rx_queues = rte_zmalloc("ethdev-
> >>rx_queues",
> >-				sizeof(dev->data->rx_queues[0]) *
> >nb_queues,
> >+				sizeof(dev->data->rx_queues[0]) *
> >+				RTE_MAX_QUEUES_PER_PORT,
> > 				RTE_CACHE_LINE_SIZE);
> > 		if (dev->data->rx_queues == NULL) {
> > 			dev->data->nb_rx_queues = 0;
> 
> We could get rid of this zmalloc by declaring rx_queues as array of
> pointers, it would make code much simpler.
> I believe the original code dates back to "Initial" release.

Yep, we can, and yes, it will simplify this piece of code.
The main reason I decided not to make this change now is that
it would change the layout of the rte_eth_dev_data structure.
In this series I tried to minimize (or avoid) changes in rte_eth_dev and rte_eth_dev_data
as much as possible, to avoid any unforeseen performance and functional impacts.
If we manage to make rte_eth_dev and rte_eth_dev_data private, we can in future
consider that change and others to the rte_eth_dev and rte_eth_dev_data layouts
without worrying about ABI breakage.

> 
> 
> >@@ -909,21 +910,11 @@ eth_dev_rx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > 		rxq = dev->data->rx_queues;
> >
> >-		for (i = nb_queues; i < old_nb_queues; i++)
> >+		for (i = nb_queues; i < old_nb_queues; i++) {
> > 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
> >-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
> >-				RTE_CACHE_LINE_SIZE);
> >-		if (rxq == NULL)
> >-			return -(ENOMEM);
> >-		if (nb_queues > old_nb_queues) {
> >-			uint16_t new_qs = nb_queues -
> >old_nb_queues;
> >-
> >-			memset(rxq + old_nb_queues, 0,
> >-				sizeof(rxq[0]) * new_qs);
> >+			rxq[i] = NULL;
> > 		}
> >
> >-		dev->data->rx_queues = rxq;
> >-
> > 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
> > 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >>rx_queue_release, -ENOTSUP);
> >
> >@@ -1138,8 +1129,9 @@ eth_dev_tx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /*
> >first time configuration */
> > 		dev->data->tx_queues = rte_zmalloc("ethdev-
> >>tx_queues",
> >-						   sizeof(dev->data-
> >>tx_queues[0]) * nb_queues,
> >-
> >RTE_CACHE_LINE_SIZE);
> >+				sizeof(dev->data->tx_queues[0]) *
> >+				RTE_MAX_QUEUES_PER_PORT,
> >+				RTE_CACHE_LINE_SIZE);
> > 		if (dev->data->tx_queues == NULL) {
> > 			dev->data->nb_tx_queues = 0;
> > 			return -(ENOMEM);
> >@@ -1149,21 +1141,11 @@ eth_dev_tx_queue_config(struct
> >rte_eth_dev *dev, uint16_t nb_queues)
> >
> > 		txq = dev->data->tx_queues;
> >
> >-		for (i = nb_queues; i < old_nb_queues; i++)
> >+		for (i = nb_queues; i < old_nb_queues; i++) {
> > 			(*dev->dev_ops->tx_queue_release)(txq[i]);
> >-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
> >-				  RTE_CACHE_LINE_SIZE);
> >-		if (txq == NULL)
> >-			return -ENOMEM;
> >-		if (nb_queues > old_nb_queues) {
> >-			uint16_t new_qs = nb_queues -
> >old_nb_queues;
> >-
> >-			memset(txq + old_nb_queues, 0,
> >-			       sizeof(txq[0]) * new_qs);
> >+			txq[i] = NULL;
> > 		}
> >
> >-		dev->data->tx_queues = txq;
> >-
> > 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
> > 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >>tx_queue_release, -ENOTSUP);
> >
> >--
> >2.26.3


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v4 1/3] security: add SA config option for inner pkt csum
  2021-09-30 12:58  4% ` [dpdk-dev] [PATCH v4 1/3] security: " Archana Muniganti
@ 2021-10-03 21:09  0%   ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-03 21:09 UTC (permalink / raw)
  To: Archana Muniganti, gakhil, Nicolau, Radu, Zhang, Roy Fan, hemant.agrawal
  Cc: anoobj, ktejasree, adwivedi, jerinj, dev



> 
> Add inner packet IPv4 hdr and L4 checksum enable options
> in conf. These will be used in case of protocol offload.
> Per SA, application could specify whether the
> checksum(compute/verify) can be offloaded to security device.
> 
> Signed-off-by: Archana Muniganti <marchana@marvell.com>
> ---
>  doc/guides/cryptodevs/features/default.ini |  1 +
>  doc/guides/rel_notes/deprecation.rst       |  4 +--
>  doc/guides/rel_notes/release_21_11.rst     |  4 +++
>  lib/cryptodev/rte_cryptodev.h              |  2 ++
>  lib/security/rte_security.h                | 31 ++++++++++++++++++++++
>  5 files changed, 40 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
> index c24814de98..96d95ddc81 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -33,6 +33,7 @@ Non-Byte aligned data  =
>  Sym raw data path API  =
>  Cipher multiple data units =
>  Cipher wrapped key     =
> +Inner checksum         =
> 
>  ;
>  ; Supported crypto algorithms of a default crypto driver.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 05fc2fdee7..8308e00ed4 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -232,8 +232,8 @@ Deprecation Notices
>    IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
> 
>  * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
> -  will be updated with new fields to support new features like IPsec inner
> -  checksum, TSO in case of protocol offload.
> +  will be updated with new fields to support new features like TSO in case of
> +  protocol offload.
> 
>  * ipsec: The structure ``rte_ipsec_sa_prm`` will be extended with a new field
>    ``hdr_l3_len`` to configure tunnel L3 header length.
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 3ade7fe5ac..5480f05a99 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -196,6 +196,10 @@ ABI Changes
>    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
>    and hard expiry limits. Limits can be either in number of packets or bytes.
> 
> +* security: The new options ``ip_csum_enable`` and ``l4_csum_enable`` were added
> +  in structure ``rte_security_ipsec_sa_options`` to indicate whether inner
> +  packet IPv4 header checksum and L4 checksum need to be offloaded to
> +  security device.
> 
>  Known Issues
>  ------------
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index bb01f0f195..d9271a6c45 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -479,6 +479,8 @@ rte_cryptodev_asym_get_xform_enum(enum rte_crypto_asym_xform_type *xform_enum,
>  /**< Support operations on multiple data-units message */
>  #define RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY		(1ULL << 26)
>  /**< Support wrapped key in cipher xform  */
> +#define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
> +/**< Support inner checksum computation/verification */
> 
>  /**
>   * Get the name of a crypto device feature flag
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index ab1a6e1f65..0c5636377e 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -230,6 +230,37 @@ struct rte_security_ipsec_sa_options {
>  	 * * 0: Do not match UDP ports
>  	 */
>  	uint32_t udp_ports_verify : 1;
> +
> +	/** Compute/verify inner packet IPv4 header checksum in tunnel mode
> +	 *
> +	 * * 1: For outbound, compute inner packet IPv4 header checksum
> +	 *      before tunnel encapsulation and for inbound, verify after
> +	 *      tunnel decapsulation.
> +	 * * 0: Inner packet IP header checksum is not computed/verified.
> +	 *
> +	 * The checksum verification status would be set in mbuf using
> +	 * PKT_RX_IP_CKSUM_xxx flags.
> +	 *
> +	 * Inner IP checksum computation can also be enabled(per operation)
> +	 * by setting the flag PKT_TX_IP_CKSUM in mbuf.
> +	 */
> +	uint32_t ip_csum_enable : 1;
> +
> +	/** Compute/verify inner packet L4 checksum in tunnel mode
> +	 *
> +	 * * 1: For outbound, compute inner packet L4 checksum before
> +	 *      tunnel encapsulation and for inbound, verify after
> +	 *      tunnel decapsulation.
> +	 * * 0: Inner packet L4 checksum is not computed/verified.
> +	 *
> +	 * The checksum verification status would be set in mbuf using
> +	 * PKT_RX_L4_CKSUM_xxx flags.
> +	 *
> +	 * Inner L4 checksum computation can also be enabled(per operation)
> +	 * by setting the flags PKT_TX_TCP_CKSUM or PKT_TX_SCTP_CKSUM or
> +	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
> +	 */
> +	uint32_t l4_csum_enable : 1;
>  };
> 
>  /** IPSec security association direction */
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.22.0


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-03 20:58  0% ` Ananyev, Konstantin
@ 2021-10-03 21:10  0%   ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-10-03 21:10 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Kumar Dabilpuram, Ankur Dwivedi, shepard.siegel, ed.czeck,
	john.miller, Igor Russkikh, ajit.khaparde, somnath.kotur,
	rahul.lakkireddy, hemant.agrawal, sachin.saxena, Wang, Haiyue,
	Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W, humin29,
	yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	Kiran Kumar Kokkilagadda, andrew.rybchenko, Maciej Czekaj [C],
	jiawenwu, jianwang, maxime.coquelin, Xia, Chenbo, thomas, Yigit,
	Ferruh, mdr, Jayatheerthan, Jay

>
>> >Copy public function pointers (rx_pkt_burst(), etc.) and related
>> >pointers to internal data from rte_eth_dev structure into a
>> >separate flat array. That array will remain in a public header.
>> >The intention here is to make rte_eth_dev and related structures
>> >internal.
>> >That should allow future possible changes to core eth_dev
>structures
>> >to be transparent to the user and help to avoid ABI/API breakages.
>> >The plan is to keep minimal part of data from rte_eth_dev public,
>> >so we still can use inline functions for 'fast' calls
>> >(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> >
>> >Signed-off-by: Konstantin Ananyev
><konstantin.ananyev@intel.com>
>> >---
>> > lib/ethdev/ethdev_private.c  | 52
>> >++++++++++++++++++++++++++++++++++++
>> > lib/ethdev/ethdev_private.h  |  7 +++++
>> > lib/ethdev/rte_ethdev.c      | 17 ++++++++++++
>> > lib/ethdev/rte_ethdev_core.h | 45
>> >+++++++++++++++++++++++++++++++
>> > 4 files changed, 121 insertions(+)
>> >
>>
>> <snip>
>>
>> >+void
>> >+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
>> >+{
>> >+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>> >+	static const struct rte_eth_fp_ops dummy_ops = {
>> >+		.rx_pkt_burst = dummy_eth_rx_burst,
>> >+		.tx_pkt_burst = dummy_eth_tx_burst,
>> >+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
>> >+		.txq = {.data = dummy_data, .clbk = dummy_data,},
>> >+	};
>> >+
>> >+	*fpo = dummy_ops;
>> >+}
>> >+
>> >+void
>> >+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> >+		const struct rte_eth_dev *dev)
>> >+{
>> >+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
>> >+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
>> >+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>> >+	fpo->rx_queue_count = dev->rx_queue_count;
>> >+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
>> >+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
>> >+
>> >+	fpo->rxq.data = dev->data->rx_queues;
>> >+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>> >+
>> >+	fpo->txq.data = dev->data->tx_queues;
>> >+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>> >+}
>> >diff --git a/lib/ethdev/ethdev_private.h
>b/lib/ethdev/ethdev_private.h
>> >index 3724429577..40333e7651 100644
>> >--- a/lib/ethdev/ethdev_private.h
>> >+++ b/lib/ethdev/ethdev_private.h
>> >@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev
>> >*_start, rte_eth_cmp_t cmp,
>> > /* Parse devargs value for representor parameter. */
>> > int rte_eth_devargs_parse_representor_ports(char *str, void
>*data);
>> >
>> >+/* reset eth 'fast' API to dummy values */
>> >+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> >+
>> >+/* setup eth 'fast' API to ethdev values */
>> >+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> >+		const struct rte_eth_dev *dev);
>> >+
>> > #endif /* _ETH_PRIVATE_H_ */
>> >diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> >index 424bc260fa..9fbb1bc3db 100644
>> >--- a/lib/ethdev/rte_ethdev.c
>> >+++ b/lib/ethdev/rte_ethdev.c
>> >@@ -44,6 +44,9 @@
>> > static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>> > struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>> >
>> >+/* public 'fast' API */
>> >+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> >+
>> > /* spinlock for eth device callbacks */
>> > static rte_spinlock_t eth_dev_cb_lock =
>RTE_SPINLOCK_INITIALIZER;
>> >
>> >@@ -1788,6 +1791,9 @@ rte_eth_dev_start(uint16_t port_id)
>> > 		(*dev->dev_ops->link_update)(dev, 0);
>> > 	}
>> >
>> >+	/* expose selection of PMD rx/tx function */
>> >+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>> >+
>>
>> Secondary process will not set these properly I believe as it might not
>> call start() if it does primary process ops will not be set.
>
>That's a very good point, have to admit - I missed that part.
>
>>
>> One simple solution is to call ops_setup() around
>rte_eth_dev_attach_secondary()
>> but if application doesn't invoke start() on Primary the ops will not be
>set for it.
>
>I think rte_eth_dev_attach_secondary() wouldn't work, as majority of
>the PMDs setup
>fast ops function pointers after it.
>From reading the code rte_eth_dev_probing_finish() seems like a good
>choice -
>as it is always the final point in device initialization for secondary
>process.

Ack, makes sense to me; I did a similar thing for the event device in
http://patches.dpdk.org/project/dpdk/patch/20211003082710.8398-4-pbhagavatula@marvell.com/

>
>BTW, we also need something similar at de-init phase.
>rte_eth_dev_release_port() seems like a good candidate for it.
>

In hindsight, I should have added a reset to rte_event_pmd_pci_remove(); I will add it in the next version.

>
>>
>> > 	rte_ethdev_trace_start(port_id);
>> > 	return 0;
>> > }
>> >@@ -1810,6 +1816,9 @@ rte_eth_dev_stop(uint16_t port_id)
>> > 		return 0;
>> > 	}
>> >
>> >+	/* point rx/tx functions to dummy ones */
>> >+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>> >+
>> > 	dev->data->dev_started = 0;
>> > 	ret = (*dev->dev_ops->dev_stop)(dev);
>> > 	rte_ethdev_trace_stop(port_id, ret);
>> >2.26.3


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 1/2] hash: split x86 and SW hash CRC intrinsics
  @ 2021-10-03 23:00  1% ` pbhagavatula
  2021-10-04  5:52  1%   ` [dpdk-dev] [PATCH v4 " pbhagavatula
  0 siblings, 1 reply; 200+ results
From: pbhagavatula @ 2021-10-03 23:00 UTC (permalink / raw)
  To: ruifeng.wang, konstantin.ananyev, jerinj, Yipeng Wang,
	Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin
  Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Split the x86 and SW hash CRC intrinsics into separate files.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 v3 Changes:
 - Split x86 and SW hash crc functions into separate files.
 - Rename `rte_crc_arm64.h` to `hash_crc_arm64.h` as it is internal and not
 installed by meson build.

 v2 Changes:
 - Don't remove `rte_crc_arm64.h` for ABI purposes.
 - Revert function pointer approach for performance reasons.
 - Select the best available algorithm based on the arch when the user passes an
 unsupported crc32 algorithm.

 lib/hash/hash_crc_sw.h  | 419 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/hash_crc_x86.h |  62 ++++++
 lib/hash/rte_hash_crc.h | 396 +------------------------------------
 3 files changed, 483 insertions(+), 394 deletions(-)
 create mode 100644 lib/hash/hash_crc_sw.h
 create mode 100644 lib/hash/hash_crc_x86.h

diff --git a/lib/hash/hash_crc_sw.h b/lib/hash/hash_crc_sw.h
new file mode 100644
index 0000000000..4790a0970b
--- /dev/null
+++ b/lib/hash/hash_crc_sw.h
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_SW_H_
+#define _HASH_CRC_SW_H_
+
+/* Lookup tables for software implementation of CRC32C */
+static const uint32_t crc32c_tables[8][256] = {
+	{0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C,
+	 0x26A1E7E8, 0xD4CA64EB, 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B,
+	 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, 0x105EC76F, 0xE235446C,
+	 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
+	 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC,
+	 0xBC267848, 0x4E4DFB4B, 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A,
+	 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, 0xAA64D611, 0x580F5512,
+	 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
+	 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD,
+	 0x1642AE59, 0xE4292D5A, 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A,
+	 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, 0x417B1DBC, 0xB3109EBF,
+	 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
+	 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F,
+	 0xED03A29B, 0x1F682198, 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927,
+	 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, 0xDBFC821C, 0x2997011F,
+	 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
+	 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E,
+	 0x4767748A, 0xB50CF789, 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859,
+	 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, 0x7198540D, 0x83F3D70E,
+	 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
+	 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE,
+	 0xDDE0EB2A, 0x2F8B6829, 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C,
+	 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, 0x082F63B7, 0xFA44E0B4,
+	 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
+	 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B,
+	 0xB4091BFF, 0x466298FC, 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C,
+	 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, 0xA24BB5A6, 0x502036A5,
+	 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
+	 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975,
+	 0x0E330A81, 0xFC588982, 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D,
+	 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, 0x38CC2A06, 0xCAA7A905,
+	 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
+	 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8,
+	 0xE52CC12C, 0x1747422F, 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF,
+	 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, 0xD3D3E1AB, 0x21B862A8,
+	 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
+	 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78,
+	 0x7FAB5E8C, 0x8DC0DD8F, 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE,
+	 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, 0x69E9F0D5, 0x9B8273D6,
+	 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
+	 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69,
+	 0xD5CF889D, 0x27A40B9E, 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E,
+	 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351},
+	{0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB,
+	 0x69CF5132, 0x7A6DC945, 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21,
+	 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD, 0x3FC5F181, 0x2C6769F6,
+	 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
+	 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92,
+	 0xCB1E630B, 0xD8BCFB7C, 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B,
+	 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47, 0xE29F20BA, 0xF13DB8CD,
+	 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
+	 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28,
+	 0x298143B1, 0x3A23DBC6, 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2,
+	 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E, 0xFF17C604, 0xECB55E73,
+	 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
+	 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17,
+	 0x0BCC548E, 0x186ECCF9, 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C,
+	 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0, 0x5DC6F43D, 0x4E646C4A,
+	 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
+	 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD,
+	 0xE9537434, 0xFAF1EC43, 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27,
+	 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB, 0xBF59D487, 0xACFB4CF0,
+	 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
+	 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94,
+	 0x4B82460D, 0x5820DE7A, 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260,
+	 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC, 0x66D73941, 0x7575A136,
+	 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
+	 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3,
+	 0xADC95A4A, 0xBE6BC23D, 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059,
+	 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185, 0x844819FB, 0x97EA818C,
+	 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
+	 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8,
+	 0x70938B71, 0x63311306, 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3,
+	 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F, 0x26992BC2, 0x353BB3B5,
+	 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
+	 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556,
+	 0x6D1B6DCF, 0x7EB9F5B8, 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC,
+	 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600, 0x3B11CD7C, 0x28B3550B,
+	 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
+	 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F,
+	 0xCFCA5FF6, 0xDC68C781, 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766,
+	 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA, 0xE64B1C47, 0xF5E98430,
+	 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
+	 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5,
+	 0x2D557F4C, 0x3EF7E73B, 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F,
+	 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483},
+	{0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664,
+	 0xD1B1F617, 0x74F06469, 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6,
+	 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC, 0x70A27D8A, 0xD5E3EFF4,
+	 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
+	 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B,
+	 0x9942B558, 0x3C032726, 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67,
+	 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D, 0xD915C5D1, 0x7C5457AF,
+	 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
+	 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA,
+	 0x40577089, 0xE516E2F7, 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828,
+	 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32, 0xC76580D9, 0x622412A7,
+	 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
+	 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878,
+	 0x2E85480B, 0x8BC4DA75, 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20,
+	 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A, 0x8F96C396, 0x2AD751E8,
+	 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
+	 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9,
+	 0xF7908DDA, 0x52D11FA4, 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B,
+	 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161, 0x56830647, 0xF3C29439,
+	 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
+	 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6,
+	 0xBF63CE95, 0x1A225CEB, 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730,
+	 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A, 0xB3764986, 0x1637DBF8,
+	 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
+	 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD,
+	 0x2A34FCDE, 0x8F756EA0, 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F,
+	 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065, 0x6A638C57, 0xCF221E29,
+	 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
+	 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6,
+	 0x83834485, 0x26C2D6FB, 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE,
+	 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4, 0x2290CF18, 0x87D15D66,
+	 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
+	 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE,
+	 0x9DF3018D, 0x38B293F3, 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C,
+	 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36, 0x3CE08A10, 0x99A1186E,
+	 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
+	 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1,
+	 0xD50042C2, 0x7041D0BC, 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD,
+	 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7, 0x9557324B, 0x3016A035,
+	 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
+	 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760,
+	 0x0C158713, 0xA954156D, 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2,
+	 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8},
+	{0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B,
+	 0xC4451272, 0x1900B8CA, 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF,
+	 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C, 0xE964B13D, 0x34211B85,
+	 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
+	 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990,
+	 0xDB65C0A9, 0x06206A11, 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2,
+	 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41, 0x2161776D, 0xFC24DDD5,
+	 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
+	 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD,
+	 0xFA04B7C4, 0x27411D7C, 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69,
+	 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A, 0xABA65FE7, 0x76E3F55F,
+	 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
+	 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A,
+	 0x99A72E73, 0x44E284CB, 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3,
+	 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610, 0xB4868D3C, 0x69C32784,
+	 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
+	 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027,
+	 0xB8C6591E, 0x6583F3A6, 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3,
+	 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040, 0x95E7FA51, 0x48A250E9,
+	 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
+	 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC,
+	 0xA7E68BC5, 0x7AA3217D, 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006,
+	 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5, 0xA4E4AAD9, 0x79A10061,
+	 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
+	 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349,
+	 0x7F816A70, 0xA2C4C0C8, 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD,
+	 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E, 0x8585DDB4, 0x58C0770C,
+	 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
+	 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519,
+	 0xB784AC20, 0x6AC10698, 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0,
+	 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443, 0x9AA50F6F, 0x47E0A5D7,
+	 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
+	 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93,
+	 0x3D4384AA, 0xE0062E12, 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07,
+	 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4, 0x106227E5, 0xCD278D5D,
+	 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
+	 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48,
+	 0x22635671, 0xFF26FCC9, 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A,
+	 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99, 0xD867E1B5, 0x05224B0D,
+	 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
+	 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825,
+	 0x0302211C, 0xDE478BA4, 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1,
+	 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842},
+	{0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C,
+	 0x906761E8, 0xA8760E44, 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65,
+	 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5, 0x8F2261D3, 0xB7330E7F,
+	 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
+	 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E,
+	 0xDA220BAA, 0xE2336406, 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3,
+	 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13, 0xDECFBEC6, 0xE6DED16A,
+	 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
+	 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598,
+	 0x04EDB56C, 0x3CFCDAC0, 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1,
+	 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151, 0x37516AAE, 0x0F400502,
+	 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
+	 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023,
+	 0x625100D7, 0x5A406F7B, 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89,
+	 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539, 0x7D1400EC, 0x45056F40,
+	 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
+	 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5,
+	 0xBC9EBE11, 0x848FD1BD, 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C,
+	 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C, 0xA3DBBE2A, 0x9BCAD186,
+	 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
+	 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7,
+	 0xF6DBD453, 0xCECABBFF, 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8,
+	 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18, 0xABC5DECD, 0x93D4B161,
+	 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
+	 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593,
+	 0x71E7D567, 0x49F6BACB, 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA,
+	 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A, 0x750A600B, 0x4D1B0FA7,
+	 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
+	 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86,
+	 0x200A0A72, 0x181B65DE, 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C,
+	 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C, 0x3F4F0A49, 0x075E65E5,
+	 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
+	 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE,
+	 0xC994DE1A, 0xF185B1B6, 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497,
+	 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27, 0xD6D1DE21, 0xEEC0B18D,
+	 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
+	 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC,
+	 0x83D1B458, 0xBBC0DBF4, 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51,
+	 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1, 0x873C0134, 0xBF2D6E98,
+	 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
+	 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A,
+	 0x5D1E0A9E, 0x650F6532, 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013,
+	 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3},
+	{0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E,
+	 0x697997B4, 0x8649FCAD, 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5,
+	 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2, 0xC00C303E, 0x2F3C5B27,
+	 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
+	 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F,
+	 0xC973BF95, 0x2643D48C, 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57,
+	 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20, 0xE5F20E92, 0x0AC2658B,
+	 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
+	 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD,
+	 0x2C81B107, 0xC3B1DA1E, 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576,
+	 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201, 0x0E045BEB, 0xE13430F2,
+	 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
+	 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A,
+	 0x077BD440, 0xE84BBF59, 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F,
+	 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778, 0xAE0E73CA, 0x413E18D3,
+	 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
+	 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108,
+	 0xE289DAD2, 0x0DB9B1CB, 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3,
+	 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4, 0x4BFC7D58, 0xA4CC1641,
+	 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
+	 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929,
+	 0x4283F2F3, 0xADB399EA, 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C,
+	 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B, 0x7C0EAFC9, 0x933EC4D0,
+	 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
+	 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86,
+	 0xB57D105C, 0x5A4D7B45, 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D,
+	 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A, 0x99FCA15B, 0x76CCCA42,
+	 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
+	 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A,
+	 0x90832EF0, 0x7FB345E9, 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF,
+	 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8, 0x39F6897A, 0xD6C6E263,
+	 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
+	 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053,
+	 0x7B757B89, 0x94451090, 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8,
+	 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F, 0xD200DC03, 0x3D30B71A,
+	 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
+	 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872,
+	 0xDB7F53A8, 0x344F38B1, 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A,
+	 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D, 0xF7FEE2AF, 0x18CE89B6,
+	 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
+	 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0,
+	 0x3E8D5D3A, 0xD1BD3623, 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B,
+	 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C},
+	{0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919,
+	 0x75E69C41, 0x1DE5B089, 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B,
+	 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA, 0x9C5BFAA6, 0xF458D66E,
+	 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
+	 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC,
+	 0xA7909BB4, 0xCF93B77C, 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5,
+	 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334, 0x73767EEE, 0x1B755226,
+	 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
+	 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002,
+	 0xD4E6E55A, 0xBCE5C992, 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110,
+	 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1, 0x7AB7077A, 0x12B42BB2,
+	 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
+	 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330,
+	 0x417C6668, 0x297F4AA0, 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884,
+	 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55, 0xA8C1008F, 0xC0C22C47,
+	 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
+	 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE,
+	 0x320A1886, 0x5A09344E, 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC,
+	 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D, 0xDBB77E61, 0xB3B452A9,
+	 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
+	 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B,
+	 0xE07C1F73, 0x887F33BB, 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC,
+	 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D, 0xBB43F3A7, 0xD340DF6F,
+	 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
+	 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B,
+	 0x1CD36813, 0x74D044DB, 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59,
+	 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988, 0xC8358D49, 0xA036A181,
+	 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
+	 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903,
+	 0xF3FEEC5B, 0x9BFDC093, 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7,
+	 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766, 0x1A438ABC, 0x7240A674,
+	 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
+	 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097,
+	 0xFA3F95CF, 0x923CB907, 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185,
+	 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454, 0x1382F328, 0x7B81DFE0,
+	 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
+	 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762,
+	 0x2849923A, 0x404ABEF2, 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B,
+	 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA, 0xFCAF7760, 0x94AC5BA8,
+	 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
+	 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C,
+	 0x5B3FECD4, 0x333CC01C, 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E,
+	 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F},
+	{0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A,
+	 0xB3657823, 0xFA590504, 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3,
+	 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE, 0x847609B4, 0xCD4A7493,
+	 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
+	 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224,
+	 0x7528754D, 0x3C14086A, 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0,
+	 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D, 0x4F3B6143, 0x06071C64,
+	 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
+	 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367,
+	 0x3A13140E, 0x732F6929, 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E,
+	 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3, 0x1A00CB32, 0x533CB615,
+	 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
+	 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2,
+	 0xEB5EB7CB, 0xA262CAEC, 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF,
+	 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782, 0xDC4DC65C, 0x9571BB7B,
+	 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
+	 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1,
+	 0xA465D688, 0xED59ABAF, 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18,
+	 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75, 0x9376A71F, 0xDA4ADA38,
+	 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
+	 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F,
+	 0x6228DBE6, 0x2B14A6C1, 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D,
+	 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360, 0x763A92BE, 0x3F06EF99,
+	 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
+	 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A,
+	 0x0312E7F3, 0x4A2E9AD4, 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63,
+	 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E, 0x3901F3FD, 0x703D8EDA,
+	 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
+	 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D,
+	 0xC85F8F04, 0x8163F223, 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20,
+	 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D, 0xFF4CFE93, 0xB67083B4,
+	 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
+	 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C,
+	 0x9D642575, 0xD4585852, 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5,
+	 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88, 0xAA7754E2, 0xE34B29C5,
+	 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
+	 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72,
+	 0x5B29281B, 0x1215553C, 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6,
+	 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB, 0x613A3C15, 0x28064132,
+	 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
+	 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31,
+	 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8,
+	 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5},
+};
+
+#define CRC32_UPD(crc, n)                                                      \
+	(crc32c_tables[(n)][(crc)&0xFF] ^                                      \
+	 crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
+
+static inline uint32_t
+crc32c_1byte(uint8_t data, uint32_t init_val)
+{
+	uint32_t crc;
+	crc = init_val;
+	crc ^= data;
+
+	return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
+}
+
+static inline uint32_t
+crc32c_2bytes(uint16_t data, uint32_t init_val)
+{
+	uint32_t crc;
+	crc = init_val;
+	crc ^= data;
+
+	crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
+
+	return crc;
+}
+
+static inline uint32_t
+crc32c_1word(uint32_t data, uint32_t init_val)
+{
+	uint32_t crc, term1, term2;
+	crc = init_val;
+	crc ^= data;
+
+	term1 = CRC32_UPD(crc, 3);
+	term2 = crc >> 16;
+	crc = term1 ^ CRC32_UPD(term2, 1);
+
+	return crc;
+}
+
+static inline uint32_t
+crc32c_2words(uint64_t data, uint32_t init_val)
+{
+	uint32_t crc, term1, term2;
+	union {
+		uint64_t u64;
+		uint32_t u32[2];
+	} d;
+	d.u64 = data;
+
+	crc = init_val;
+	crc ^= d.u32[0];
+
+	term1 = CRC32_UPD(crc, 7);
+	term2 = crc >> 16;
+	crc = term1 ^ CRC32_UPD(term2, 5);
+	term1 = CRC32_UPD(d.u32[1], 3);
+	term2 = d.u32[1] >> 16;
+	crc ^= term1 ^ CRC32_UPD(term2, 1);
+
+	return crc;
+}
+
+#endif /* _HASH_CRC_SW_H_ */
diff --git a/lib/hash/hash_crc_x86.h b/lib/hash/hash_crc_x86.h
new file mode 100644
index 0000000000..b80a742afa
--- /dev/null
+++ b/lib/hash/hash_crc_x86.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_X86_H_
+#define _HASH_CRC_X86_H_
+
+static inline uint32_t
+crc32c_sse42_u8(uint8_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32b %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u16(uint16_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32w %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u32(uint32_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32l %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
+{
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} d;
+
+	d.u64 = data;
+	init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
+	init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
+	return (uint32_t)init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64(uint64_t data, uint64_t init_val)
+{
+	__asm__ volatile(
+			"crc32q %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return (uint32_t)init_val;
+}
+
+#endif /* _HASH_CRC_X86_H_ */
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 3e131aa6bb..1cc8f84fe2 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -21,400 +21,7 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>

-/* Lookup tables for software implementation of CRC32C */
-static const uint32_t crc32c_tables[8][256] = {{
- 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
- 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
- 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
- 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
- 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
- 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
- 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
- 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
- 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
- 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
- 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
- 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
- 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
- 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
- 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
- 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
- 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
- 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
- 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
- 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
- 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
- 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
- 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
- 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
- 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
- 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
- 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
- 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
- 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
- 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
- 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
- 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
-},
-{
- 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
- 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
- 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
- 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
- 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
- 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
- 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
- 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
- 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
- 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
- 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
- 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
- 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
- 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
- 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
- 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
- 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
- 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
- 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
- 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
- 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
- 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
- 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
- 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
- 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
- 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
- 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
- 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
- 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
- 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
- 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
- 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
-},
-{
- 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
- 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
- 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
- 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
- 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
- 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
- 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
- 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
- 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
- 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
- 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
- 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
- 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
- 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
- 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
- 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
- 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
- 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
- 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
- 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
- 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
- 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
- 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
- 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
- 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
- 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
- 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
- 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
- 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
- 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
- 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
- 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
-},
-{
- 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
- 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
- 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
- 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
- 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
- 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
- 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
- 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
- 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
- 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
- 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
- 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
- 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
- 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
- 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
- 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
- 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
- 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
- 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
- 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
- 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
- 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
- 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
- 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
- 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
- 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
- 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
- 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
- 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
- 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
- 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
- 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
-},
-{
- 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
- 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
- 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
- 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
- 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
- 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
- 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
- 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
- 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
- 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
- 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
- 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
- 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
- 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
- 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
- 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
- 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
- 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
- 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
- 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
- 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
- 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
- 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
- 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
- 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
- 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
- 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
- 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
- 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
- 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
- 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
- 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
-},
-{
- 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
- 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
- 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
- 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
- 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
- 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
- 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
- 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
- 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
- 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
- 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
- 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
- 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
- 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
- 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
- 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
- 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
- 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
- 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
- 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
- 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
- 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
- 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
- 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
- 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
- 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
- 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
- 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
- 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
- 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
- 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
- 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
-},
-{
- 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
- 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
- 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
- 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
- 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
- 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
- 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
- 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
- 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
- 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
- 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
- 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
- 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
- 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
- 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
- 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
- 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
- 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
- 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
- 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
- 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
- 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
- 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
- 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
- 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
- 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
- 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
- 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
- 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
- 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
- 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
- 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
-},
-{
- 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
- 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
- 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
- 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
- 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
- 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
- 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
- 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
- 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
- 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
- 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
- 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
- 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
- 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
- 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
- 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
- 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
- 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
- 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
- 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
- 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
- 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
- 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
- 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
- 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
- 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
- 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
- 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
- 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
- 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
- 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
- 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
-}};
-
-#define CRC32_UPD(crc, n) \
-	(crc32c_tables[(n)][(crc) & 0xFF] ^ \
-	 crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
-
-static inline uint32_t
-crc32c_1byte(uint8_t data, uint32_t init_val)
-{
-	uint32_t crc;
-	crc = init_val;
-	crc ^= data;
-
-	return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
-}
-
-static inline uint32_t
-crc32c_2bytes(uint16_t data, uint32_t init_val)
-{
-	uint32_t crc;
-	crc = init_val;
-	crc ^= data;
-
-	crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
-
-	return crc;
-}
-
-static inline uint32_t
-crc32c_1word(uint32_t data, uint32_t init_val)
-{
-	uint32_t crc, term1, term2;
-	crc = init_val;
-	crc ^= data;
-
-	term1 = CRC32_UPD(crc, 3);
-	term2 = crc >> 16;
-	crc = term1 ^ CRC32_UPD(term2, 1);
-
-	return crc;
-}
-
-static inline uint32_t
-crc32c_2words(uint64_t data, uint32_t init_val)
-{
-	uint32_t crc, term1, term2;
-	union {
-		uint64_t u64;
-		uint32_t u32[2];
-	} d;
-	d.u64 = data;
-
-	crc = init_val;
-	crc ^= d.u32[0];
-
-	term1 = CRC32_UPD(crc, 7);
-	term2 = crc >> 16;
-	crc = term1 ^ CRC32_UPD(term2, 5);
-	term1 = CRC32_UPD(d.u32[1], 3);
-	term2 = d.u32[1] >> 16;
-	crc ^= term1 ^ CRC32_UPD(term2, 1);
-
-	return crc;
-}
-
-#if defined(RTE_ARCH_X86)
-static inline uint32_t
-crc32c_sse42_u8(uint8_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32b %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u16(uint16_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32w %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u32(uint32_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32l %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
-{
-	union {
-		uint32_t u32[2];
-		uint64_t u64;
-	} d;
-
-	d.u64 = data;
-	init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
-	init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
-	return (uint32_t)init_val;
-}
-#endif
-
-#ifdef RTE_ARCH_X86_64
-static inline uint32_t
-crc32c_sse42_u64(uint64_t data, uint64_t init_val)
-{
-	__asm__ volatile(
-			"crc32q %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return (uint32_t)init_val;
-}
-#endif
+#include "hash_crc_sw.h"

 #define CRC32_SW            (1U << 0)
 #define CRC32_SSE42         (1U << 1)
@@ -427,6 +34,7 @@ static uint8_t crc32_alg = CRC32_SW;
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
 #else
+#include "hash_crc_x86.h"

 /**
  * Allow or disallow use of SSE4.2 intrinsics for CRC32 hash
--
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v4 1/2] hash: split x86 and SW hash CRC intrinsics
  2021-10-03 23:00  1% ` [dpdk-dev] [PATCH v3 1/2] hash: split x86 and SW hash CRC intrinsics pbhagavatula
@ 2021-10-04  5:52  1%   ` pbhagavatula
  0 siblings, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-04  5:52 UTC (permalink / raw)
  To: ruifeng.wang, konstantin.ananyev, jerinj, Yipeng Wang,
	Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin
  Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Split x86 and SW hash CRC intrinsics into separate files.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 v4 Changes:
 - Fix compilation issues

 v3 Changes:
 - Split x86 and SW hash crc functions into separate files.
 - Rename `rte_crc_arm64.h` to `hash_crc_arm64.h` as it is internal and not
 installed by the meson build.

 v2 Changes:
 - Don't remove `rte_crc_arm64.h` for ABI purposes.
 - Revert function pointer approach for performance reasons.
 - Select the best available algorithm based on the arch when the user passes
 an unsupported crc32 algorithm.

 lib/hash/hash_crc_sw.h  | 419 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/hash_crc_x86.h |  62 ++++++
 lib/hash/rte_hash_crc.h | 396 +------------------------------------
 3 files changed, 483 insertions(+), 394 deletions(-)
 create mode 100644 lib/hash/hash_crc_sw.h
 create mode 100644 lib/hash/hash_crc_x86.h

diff --git a/lib/hash/hash_crc_sw.h b/lib/hash/hash_crc_sw.h
new file mode 100644
index 0000000000..4790a0970b
--- /dev/null
+++ b/lib/hash/hash_crc_sw.h
@@ -0,0 +1,419 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_SW_H_
+#define _HASH_CRC_SW_H_
+
+/* Lookup tables for software implementation of CRC32C */
+static const uint32_t crc32c_tables[8][256] = {
+	{0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C,
+	 0x26A1E7E8, 0xD4CA64EB, 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B,
+	 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, 0x105EC76F, 0xE235446C,
+	 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
+	 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC,
+	 0xBC267848, 0x4E4DFB4B, 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A,
+	 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, 0xAA64D611, 0x580F5512,
+	 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
+	 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD,
+	 0x1642AE59, 0xE4292D5A, 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A,
+	 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, 0x417B1DBC, 0xB3109EBF,
+	 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
+	 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F,
+	 0xED03A29B, 0x1F682198, 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927,
+	 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, 0xDBFC821C, 0x2997011F,
+	 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
+	 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E,
+	 0x4767748A, 0xB50CF789, 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859,
+	 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, 0x7198540D, 0x83F3D70E,
+	 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
+	 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE,
+	 0xDDE0EB2A, 0x2F8B6829, 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C,
+	 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, 0x082F63B7, 0xFA44E0B4,
+	 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
+	 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B,
+	 0xB4091BFF, 0x466298FC, 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C,
+	 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, 0xA24BB5A6, 0x502036A5,
+	 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
+	 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975,
+	 0x0E330A81, 0xFC588982, 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D,
+	 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, 0x38CC2A06, 0xCAA7A905,
+	 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
+	 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8,
+	 0xE52CC12C, 0x1747422F, 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF,
+	 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, 0xD3D3E1AB, 0x21B862A8,
+	 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
+	 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78,
+	 0x7FAB5E8C, 0x8DC0DD8F, 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE,
+	 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, 0x69E9F0D5, 0x9B8273D6,
+	 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
+	 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69,
+	 0xD5CF889D, 0x27A40B9E, 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E,
+	 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351},
+	{0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB,
+	 0x69CF5132, 0x7A6DC945, 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21,
+	 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD, 0x3FC5F181, 0x2C6769F6,
+	 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
+	 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92,
+	 0xCB1E630B, 0xD8BCFB7C, 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B,
+	 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47, 0xE29F20BA, 0xF13DB8CD,
+	 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
+	 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28,
+	 0x298143B1, 0x3A23DBC6, 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2,
+	 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E, 0xFF17C604, 0xECB55E73,
+	 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
+	 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17,
+	 0x0BCC548E, 0x186ECCF9, 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C,
+	 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0, 0x5DC6F43D, 0x4E646C4A,
+	 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
+	 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD,
+	 0xE9537434, 0xFAF1EC43, 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27,
+	 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB, 0xBF59D487, 0xACFB4CF0,
+	 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
+	 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94,
+	 0x4B82460D, 0x5820DE7A, 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260,
+	 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC, 0x66D73941, 0x7575A136,
+	 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
+	 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3,
+	 0xADC95A4A, 0xBE6BC23D, 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059,
+	 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185, 0x844819FB, 0x97EA818C,
+	 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
+	 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8,
+	 0x70938B71, 0x63311306, 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3,
+	 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F, 0x26992BC2, 0x353BB3B5,
+	 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
+	 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556,
+	 0x6D1B6DCF, 0x7EB9F5B8, 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC,
+	 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600, 0x3B11CD7C, 0x28B3550B,
+	 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
+	 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F,
+	 0xCFCA5FF6, 0xDC68C781, 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766,
+	 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA, 0xE64B1C47, 0xF5E98430,
+	 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
+	 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5,
+	 0x2D557F4C, 0x3EF7E73B, 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F,
+	 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483},
+	{0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664,
+	 0xD1B1F617, 0x74F06469, 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6,
+	 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC, 0x70A27D8A, 0xD5E3EFF4,
+	 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
+	 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B,
+	 0x9942B558, 0x3C032726, 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67,
+	 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D, 0xD915C5D1, 0x7C5457AF,
+	 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
+	 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA,
+	 0x40577089, 0xE516E2F7, 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828,
+	 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32, 0xC76580D9, 0x622412A7,
+	 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
+	 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878,
+	 0x2E85480B, 0x8BC4DA75, 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20,
+	 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A, 0x8F96C396, 0x2AD751E8,
+	 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
+	 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9,
+	 0xF7908DDA, 0x52D11FA4, 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B,
+	 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161, 0x56830647, 0xF3C29439,
+	 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
+	 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6,
+	 0xBF63CE95, 0x1A225CEB, 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730,
+	 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A, 0xB3764986, 0x1637DBF8,
+	 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
+	 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD,
+	 0x2A34FCDE, 0x8F756EA0, 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F,
+	 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065, 0x6A638C57, 0xCF221E29,
+	 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
+	 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6,
+	 0x83834485, 0x26C2D6FB, 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE,
+	 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4, 0x2290CF18, 0x87D15D66,
+	 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
+	 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE,
+	 0x9DF3018D, 0x38B293F3, 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C,
+	 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36, 0x3CE08A10, 0x99A1186E,
+	 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
+	 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1,
+	 0xD50042C2, 0x7041D0BC, 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD,
+	 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7, 0x9557324B, 0x3016A035,
+	 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
+	 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760,
+	 0x0C158713, 0xA954156D, 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2,
+	 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8},
+	{0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B,
+	 0xC4451272, 0x1900B8CA, 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF,
+	 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C, 0xE964B13D, 0x34211B85,
+	 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
+	 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990,
+	 0xDB65C0A9, 0x06206A11, 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2,
+	 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41, 0x2161776D, 0xFC24DDD5,
+	 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
+	 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD,
+	 0xFA04B7C4, 0x27411D7C, 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69,
+	 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A, 0xABA65FE7, 0x76E3F55F,
+	 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
+	 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A,
+	 0x99A72E73, 0x44E284CB, 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3,
+	 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610, 0xB4868D3C, 0x69C32784,
+	 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
+	 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027,
+	 0xB8C6591E, 0x6583F3A6, 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3,
+	 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040, 0x95E7FA51, 0x48A250E9,
+	 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
+	 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC,
+	 0xA7E68BC5, 0x7AA3217D, 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006,
+	 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5, 0xA4E4AAD9, 0x79A10061,
+	 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
+	 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349,
+	 0x7F816A70, 0xA2C4C0C8, 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD,
+	 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E, 0x8585DDB4, 0x58C0770C,
+	 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
+	 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519,
+	 0xB784AC20, 0x6AC10698, 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0,
+	 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443, 0x9AA50F6F, 0x47E0A5D7,
+	 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
+	 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93,
+	 0x3D4384AA, 0xE0062E12, 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07,
+	 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4, 0x106227E5, 0xCD278D5D,
+	 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
+	 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48,
+	 0x22635671, 0xFF26FCC9, 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A,
+	 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99, 0xD867E1B5, 0x05224B0D,
+	 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
+	 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825,
+	 0x0302211C, 0xDE478BA4, 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1,
+	 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842},
+	{0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C,
+	 0x906761E8, 0xA8760E44, 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65,
+	 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5, 0x8F2261D3, 0xB7330E7F,
+	 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
+	 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E,
+	 0xDA220BAA, 0xE2336406, 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3,
+	 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13, 0xDECFBEC6, 0xE6DED16A,
+	 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
+	 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598,
+	 0x04EDB56C, 0x3CFCDAC0, 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1,
+	 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151, 0x37516AAE, 0x0F400502,
+	 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
+	 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023,
+	 0x625100D7, 0x5A406F7B, 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89,
+	 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539, 0x7D1400EC, 0x45056F40,
+	 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
+	 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5,
+	 0xBC9EBE11, 0x848FD1BD, 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C,
+	 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C, 0xA3DBBE2A, 0x9BCAD186,
+	 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
+	 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7,
+	 0xF6DBD453, 0xCECABBFF, 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8,
+	 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18, 0xABC5DECD, 0x93D4B161,
+	 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
+	 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593,
+	 0x71E7D567, 0x49F6BACB, 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA,
+	 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A, 0x750A600B, 0x4D1B0FA7,
+	 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
+	 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86,
+	 0x200A0A72, 0x181B65DE, 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C,
+	 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C, 0x3F4F0A49, 0x075E65E5,
+	 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
+	 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE,
+	 0xC994DE1A, 0xF185B1B6, 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497,
+	 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27, 0xD6D1DE21, 0xEEC0B18D,
+	 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
+	 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC,
+	 0x83D1B458, 0xBBC0DBF4, 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51,
+	 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1, 0x873C0134, 0xBF2D6E98,
+	 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
+	 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A,
+	 0x5D1E0A9E, 0x650F6532, 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013,
+	 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3},
+	{0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E,
+	 0x697997B4, 0x8649FCAD, 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5,
+	 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2, 0xC00C303E, 0x2F3C5B27,
+	 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
+	 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F,
+	 0xC973BF95, 0x2643D48C, 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57,
+	 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20, 0xE5F20E92, 0x0AC2658B,
+	 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
+	 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD,
+	 0x2C81B107, 0xC3B1DA1E, 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576,
+	 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201, 0x0E045BEB, 0xE13430F2,
+	 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
+	 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A,
+	 0x077BD440, 0xE84BBF59, 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F,
+	 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778, 0xAE0E73CA, 0x413E18D3,
+	 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
+	 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108,
+	 0xE289DAD2, 0x0DB9B1CB, 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3,
+	 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4, 0x4BFC7D58, 0xA4CC1641,
+	 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
+	 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929,
+	 0x4283F2F3, 0xADB399EA, 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C,
+	 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B, 0x7C0EAFC9, 0x933EC4D0,
+	 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
+	 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86,
+	 0xB57D105C, 0x5A4D7B45, 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D,
+	 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A, 0x99FCA15B, 0x76CCCA42,
+	 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
+	 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A,
+	 0x90832EF0, 0x7FB345E9, 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF,
+	 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8, 0x39F6897A, 0xD6C6E263,
+	 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
+	 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053,
+	 0x7B757B89, 0x94451090, 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8,
+	 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F, 0xD200DC03, 0x3D30B71A,
+	 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
+	 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872,
+	 0xDB7F53A8, 0x344F38B1, 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A,
+	 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D, 0xF7FEE2AF, 0x18CE89B6,
+	 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
+	 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0,
+	 0x3E8D5D3A, 0xD1BD3623, 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B,
+	 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C},
+	{0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919,
+	 0x75E69C41, 0x1DE5B089, 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B,
+	 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA, 0x9C5BFAA6, 0xF458D66E,
+	 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
+	 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC,
+	 0xA7909BB4, 0xCF93B77C, 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5,
+	 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334, 0x73767EEE, 0x1B755226,
+	 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
+	 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002,
+	 0xD4E6E55A, 0xBCE5C992, 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110,
+	 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1, 0x7AB7077A, 0x12B42BB2,
+	 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
+	 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330,
+	 0x417C6668, 0x297F4AA0, 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884,
+	 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55, 0xA8C1008F, 0xC0C22C47,
+	 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
+	 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE,
+	 0x320A1886, 0x5A09344E, 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC,
+	 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D, 0xDBB77E61, 0xB3B452A9,
+	 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
+	 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B,
+	 0xE07C1F73, 0x887F33BB, 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC,
+	 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D, 0xBB43F3A7, 0xD340DF6F,
+	 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
+	 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B,
+	 0x1CD36813, 0x74D044DB, 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59,
+	 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988, 0xC8358D49, 0xA036A181,
+	 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
+	 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903,
+	 0xF3FEEC5B, 0x9BFDC093, 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7,
+	 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766, 0x1A438ABC, 0x7240A674,
+	 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
+	 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097,
+	 0xFA3F95CF, 0x923CB907, 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185,
+	 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454, 0x1382F328, 0x7B81DFE0,
+	 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
+	 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762,
+	 0x2849923A, 0x404ABEF2, 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B,
+	 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA, 0xFCAF7760, 0x94AC5BA8,
+	 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
+	 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C,
+	 0x5B3FECD4, 0x333CC01C, 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E,
+	 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F},
+	{0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A,
+	 0xB3657823, 0xFA590504, 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3,
+	 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE, 0x847609B4, 0xCD4A7493,
+	 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
+	 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224,
+	 0x7528754D, 0x3C14086A, 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0,
+	 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D, 0x4F3B6143, 0x06071C64,
+	 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
+	 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367,
+	 0x3A13140E, 0x732F6929, 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E,
+	 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3, 0x1A00CB32, 0x533CB615,
+	 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
+	 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2,
+	 0xEB5EB7CB, 0xA262CAEC, 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF,
+	 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782, 0xDC4DC65C, 0x9571BB7B,
+	 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
+	 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1,
+	 0xA465D688, 0xED59ABAF, 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18,
+	 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75, 0x9376A71F, 0xDA4ADA38,
+	 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
+	 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F,
+	 0x6228DBE6, 0x2B14A6C1, 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D,
+	 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360, 0x763A92BE, 0x3F06EF99,
+	 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
+	 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A,
+	 0x0312E7F3, 0x4A2E9AD4, 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63,
+	 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E, 0x3901F3FD, 0x703D8EDA,
+	 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
+	 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D,
+	 0xC85F8F04, 0x8163F223, 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20,
+	 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D, 0xFF4CFE93, 0xB67083B4,
+	 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
+	 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C,
+	 0x9D642575, 0xD4585852, 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5,
+	 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88, 0xAA7754E2, 0xE34B29C5,
+	 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
+	 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72,
+	 0x5B29281B, 0x1215553C, 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6,
+	 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB, 0x613A3C15, 0x28064132,
+	 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
+	 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31,
+	 0x14124958, 0x5D2E347F, 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8,
+	 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5},
+};
+
+#define CRC32_UPD(crc, n)                                                      \
+	(crc32c_tables[(n)][(crc)&0xFF] ^                                      \
+	 crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
+
+static inline uint32_t
+crc32c_1byte(uint8_t data, uint32_t init_val)
+{
+	uint32_t crc;
+	crc = init_val;
+	crc ^= data;
+
+	return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
+}
+
+static inline uint32_t
+crc32c_2bytes(uint16_t data, uint32_t init_val)
+{
+	uint32_t crc;
+	crc = init_val;
+	crc ^= data;
+
+	crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
+
+	return crc;
+}
+
+static inline uint32_t
+crc32c_1word(uint32_t data, uint32_t init_val)
+{
+	uint32_t crc, term1, term2;
+	crc = init_val;
+	crc ^= data;
+
+	term1 = CRC32_UPD(crc, 3);
+	term2 = crc >> 16;
+	crc = term1 ^ CRC32_UPD(term2, 1);
+
+	return crc;
+}
+
+static inline uint32_t
+crc32c_2words(uint64_t data, uint32_t init_val)
+{
+	uint32_t crc, term1, term2;
+	union {
+		uint64_t u64;
+		uint32_t u32[2];
+	} d;
+	d.u64 = data;
+
+	crc = init_val;
+	crc ^= d.u32[0];
+
+	term1 = CRC32_UPD(crc, 7);
+	term2 = crc >> 16;
+	crc = term1 ^ CRC32_UPD(term2, 5);
+	term1 = CRC32_UPD(d.u32[1], 3);
+	term2 = d.u32[1] >> 16;
+	crc ^= term1 ^ CRC32_UPD(term2, 1);
+
+	return crc;
+}
+
+#endif /* HASH_CRC_SW */
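As a side note for reviewers: the slicing tables moved into this header can be regenerated from the reflected CRC32C (Castagnoli) polynomial 0x82F63B78 rather than checked by eye. A minimal sketch (not part of the patch; table and function names are illustrative) that rebuilds the first four tables and mirrors the `CRC32_UPD` scheme used by `crc32c_1byte()` and `crc32c_1word()`:

```c
#include <stdint.h>

/* Reflected CRC32C (Castagnoli) polynomial, the generator of crc32c_tables[]. */
#define CRC32C_POLY_REFL 0x82F63B78u

static uint32_t tbl[4][256];

static void crc32c_init_tables(void)
{
	uint32_t i, c;
	int k;

	for (i = 0; i < 256; i++) {
		c = i;
		for (k = 0; k < 8; k++)
			c = (c & 1) ? (c >> 1) ^ CRC32C_POLY_REFL : c >> 1;
		tbl[0][i] = c;	/* should reproduce crc32c_tables[0][i] */
	}
	/* tbl[k][i]: CRC of byte i followed by k zero bytes (slicing-by-4). */
	for (k = 1; k < 4; k++)
		for (i = 0; i < 256; i++)
			tbl[k][i] = (tbl[k - 1][i] >> 8) ^
				    tbl[0][tbl[k - 1][i] & 0xFF];
}

/* Byte-at-a-time update, same recurrence as crc32c_1byte(). */
static uint32_t crc32c_byte(uint8_t data, uint32_t crc)
{
	crc ^= data;
	return tbl[0][crc & 0xFF] ^ (crc >> 8);
}

/* One little-endian 32-bit word per step, same structure as crc32c_1word(). */
static uint32_t crc32c_word(uint32_t data, uint32_t crc)
{
	uint32_t term2;

	crc ^= data;
	term2 = crc >> 16;
	return tbl[3][crc & 0xFF] ^ tbl[2][(crc >> 8) & 0xFF] ^
	       tbl[1][term2 & 0xFF] ^ tbl[0][(term2 >> 8) & 0xFF];
}
```

With the usual 0xFFFFFFFF initial value and final inversion, byte-wise accumulation over "123456789" yields the standard CRC32C check value 0xE3069283, and the word-wise path agrees with four byte-wise steps.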
diff --git a/lib/hash/hash_crc_x86.h b/lib/hash/hash_crc_x86.h
new file mode 100644
index 0000000000..b80a742afa
--- /dev/null
+++ b/lib/hash/hash_crc_x86.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _HASH_CRC_X86_H_
+#define _HASH_CRC_X86_H_
+
+static inline uint32_t
+crc32c_sse42_u8(uint8_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32b %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u16(uint16_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32w %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u32(uint32_t data, uint32_t init_val)
+{
+	__asm__ volatile(
+			"crc32l %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
+{
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} d;
+
+	d.u64 = data;
+	init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
+	init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
+	return (uint32_t)init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64(uint64_t data, uint64_t init_val)
+{
+	__asm__ volatile(
+			"crc32q %[data], %[init_val];"
+			: [init_val] "+r" (init_val)
+			: [data] "rm" (data));
+	return (uint32_t)init_val;
+}
+
+#endif
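Reviewer note on `crc32c_sse42_u64_mimic()`: it is correct because, for a little-endian 64-bit value, feeding the CRC the low 32-bit half and then the high half visits the same bytes in the same order as a single 8-byte pass. A portable sketch of that identity (a bitwise byte update stands in for the `crc32` instruction; these helper names are illustrative, not part of the patch, and a little-endian host is assumed):

```c
#include <stdint.h>
#include <string.h>

#define POLY 0x82F63B78u	/* reflected CRC32C polynomial */

/* Bitwise per-byte CRC32C update (stand-in for the crc32b instruction). */
static uint32_t crc32c_byte_upd(uint8_t b, uint32_t crc)
{
	int k;

	crc ^= b;
	for (k = 0; k < 8; k++)
		crc = (crc & 1) ? (crc >> 1) ^ POLY : crc >> 1;
	return crc;
}

/* CRC32C over an 8-byte value, one byte at a time, in memory order. */
static uint32_t crc32c_u64_bytes(uint64_t data, uint32_t crc)
{
	uint8_t buf[8];
	int i;

	memcpy(buf, &data, sizeof(buf));	/* little-endian host assumed */
	for (i = 0; i < 8; i++)
		crc = crc32c_byte_upd(buf[i], crc);
	return crc;
}

/* Same value via two 32-bit halves, mirroring crc32c_sse42_u64_mimic(). */
static uint32_t crc32c_u64_split(uint64_t data, uint32_t crc)
{
	uint32_t lo = (uint32_t)data, hi = (uint32_t)(data >> 32);
	int i;

	for (i = 0; i < 4; i++)
		crc = crc32c_byte_upd((uint8_t)(lo >> (8 * i)), crc);
	for (i = 0; i < 4; i++)
		crc = crc32c_byte_upd((uint8_t)(hi >> (8 * i)), crc);
	return crc;
}
```

The same reasoning is why the mimic variant is a safe fallback on 32-bit builds, where the `crc32q` form used by `crc32c_sse42_u64()` is unavailable.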
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 3e131aa6bb..1cc8f84fe2 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -21,400 +21,7 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>

-/* Lookup tables for software implementation of CRC32C */
-static const uint32_t crc32c_tables[8][256] = {{
- 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
- 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
- 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
- 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
- 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
- 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
- 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
- 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
- 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
- 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
- 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
- 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
- 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
- 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
- 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
- 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
- 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
- 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
- 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
- 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
- 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
- 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
- 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
- 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
- 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
- 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
- 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
- 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
- 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
- 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
- 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
- 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
-},
-{
- 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
- 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
- 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
- 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
- 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
- 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
- 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
- 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
- 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
- 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
- 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
- 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
- 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
- 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
- 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
- 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
- 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
- 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
- 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
- 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
- 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
- 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
- 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
- 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
- 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
- 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
- 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
- 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
- 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
- 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
- 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
- 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
-},
-{
- 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
- 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
- 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
- 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
- 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
- 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
- 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
- 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
- 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
- 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
- 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
- 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
- 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
- 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
- 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
- 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
- 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
- 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
- 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
- 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
- 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
- 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
- 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
- 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
- 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
- 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
- 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
- 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
- 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
- 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
- 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
- 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
-},
-{
- 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
- 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
- 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
- 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
- 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
- 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
- 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
- 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
- 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
- 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
- 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
- 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
- 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
- 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
- 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
- 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
- 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
- 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
- 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
- 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
- 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
- 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
- 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
- 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
- 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
- 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
- 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
- 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
- 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
- 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
- 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
- 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
-},
-{
- 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
- 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
- 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
- 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
- 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
- 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
- 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
- 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
- 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
- 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
- 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
- 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
- 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
- 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
- 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
- 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
- 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
- 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
- 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
- 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
- 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
- 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
- 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
- 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
- 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
- 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
- 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
- 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
- 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
- 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
- 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
- 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
-},
-{
- 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
- 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
- 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
- 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
- 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
- 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
- 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
- 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
- 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
- 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
- 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
- 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
- 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
- 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
- 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
- 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
- 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
- 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
- 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
- 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
- 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
- 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
- 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
- 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
- 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
- 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
- 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
- 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
- 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
- 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
- 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
- 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
-},
-{
- 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
- 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
- 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
- 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
- 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
- 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
- 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
- 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
- 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
- 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
- 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
- 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
- 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
- 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
- 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
- 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
- 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
- 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
- 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
- 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
- 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
- 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
- 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
- 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
- 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
- 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
- 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
- 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
- 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
- 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
- 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
- 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
-},
-{
- 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
- 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
- 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
- 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
- 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
- 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
- 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
- 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
- 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
- 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
- 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
- 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
- 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
- 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
- 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
- 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
- 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
- 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
- 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
- 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
- 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
- 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
- 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
- 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
- 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
- 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
- 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
- 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
- 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
- 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
- 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
- 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
-}};
-
-#define CRC32_UPD(crc, n) \
-	(crc32c_tables[(n)][(crc) & 0xFF] ^ \
-	 crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
-
-static inline uint32_t
-crc32c_1byte(uint8_t data, uint32_t init_val)
-{
-	uint32_t crc;
-	crc = init_val;
-	crc ^= data;
-
-	return crc32c_tables[0][crc & 0xff] ^ (crc >> 8);
-}
-
-static inline uint32_t
-crc32c_2bytes(uint16_t data, uint32_t init_val)
-{
-	uint32_t crc;
-	crc = init_val;
-	crc ^= data;
-
-	crc = CRC32_UPD(crc, 1) ^ (crc >> 16);
-
-	return crc;
-}
-
-static inline uint32_t
-crc32c_1word(uint32_t data, uint32_t init_val)
-{
-	uint32_t crc, term1, term2;
-	crc = init_val;
-	crc ^= data;
-
-	term1 = CRC32_UPD(crc, 3);
-	term2 = crc >> 16;
-	crc = term1 ^ CRC32_UPD(term2, 1);
-
-	return crc;
-}
-
-static inline uint32_t
-crc32c_2words(uint64_t data, uint32_t init_val)
-{
-	uint32_t crc, term1, term2;
-	union {
-		uint64_t u64;
-		uint32_t u32[2];
-	} d;
-	d.u64 = data;
-
-	crc = init_val;
-	crc ^= d.u32[0];
-
-	term1 = CRC32_UPD(crc, 7);
-	term2 = crc >> 16;
-	crc = term1 ^ CRC32_UPD(term2, 5);
-	term1 = CRC32_UPD(d.u32[1], 3);
-	term2 = d.u32[1] >> 16;
-	crc ^= term1 ^ CRC32_UPD(term2, 1);
-
-	return crc;
-}
-
-#if defined(RTE_ARCH_X86)
-static inline uint32_t
-crc32c_sse42_u8(uint8_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32b %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u16(uint16_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32w %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u32(uint32_t data, uint32_t init_val)
-{
-	__asm__ volatile(
-			"crc32l %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
-{
-	union {
-		uint32_t u32[2];
-		uint64_t u64;
-	} d;
-
-	d.u64 = data;
-	init_val = crc32c_sse42_u32(d.u32[0], (uint32_t)init_val);
-	init_val = crc32c_sse42_u32(d.u32[1], (uint32_t)init_val);
-	return (uint32_t)init_val;
-}
-#endif
-
-#ifdef RTE_ARCH_X86_64
-static inline uint32_t
-crc32c_sse42_u64(uint64_t data, uint64_t init_val)
-{
-	__asm__ volatile(
-			"crc32q %[data], %[init_val];"
-			: [init_val] "+r" (init_val)
-			: [data] "rm" (data));
-	return (uint32_t)init_val;
-}
-#endif
+#include <hash_crc_sw.h>

 #define CRC32_SW            (1U << 0)
 #define CRC32_SSE42         (1U << 1)
@@ -427,6 +34,7 @@ static uint8_t crc32_alg = CRC32_SW;
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
 #else
+#include "hash_crc_x86.h"

 /**
  * Allow or disallow use of SSE4.2 instrinsics for CRC32 hash
--
2.17.1


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH] cryptodev: extend data-unit length field
@ 2021-10-04  6:36 12% Matan Azrad
  0 siblings, 0 replies; 200+ results
From: Matan Azrad @ 2021-10-04  6:36 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty; +Cc: dev, Thomas Monjalon

As described in [1] and as announced in [2], the field ``dataunit_len``
of ``struct rte_crypto_cipher_xform`` is moved to the end of the
structure and extended to ``uint32_t``.

This allows data-unit lengths larger than 64K bytes to be
supported.

[1] commit d014dddb2d69 ("cryptodev: support multiple cipher
data-units")
[2] commit 9a5c09211b3a ("doc: announce extension of crypto data-unit
length")

Signed-off-by: Matan Azrad <matan@nvidia.com>
---
 app/test/test_cryptodev_blockcipher.h  |  2 +-
 doc/guides/rel_notes/deprecation.rst   |  4 ---
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 examples/l2fwd-crypto/main.c           |  6 ++---
 lib/cryptodev/rte_crypto_sym.h         | 36 +++++++++-----------------
 5 files changed, 19 insertions(+), 32 deletions(-)

diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h
index dcaa08ae22..84f5d57787 100644
--- a/app/test/test_cryptodev_blockcipher.h
+++ b/app/test/test_cryptodev_blockcipher.h
@@ -97,7 +97,7 @@ struct blockcipher_test_data {
 
 	unsigned int cipher_offset;
 	unsigned int auth_offset;
-	uint16_t xts_dataunit_len;
+	uint32_t xts_dataunit_len;
 	bool wrapped_key;
 };
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..8b54088a39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -202,10 +202,6 @@ Deprecation Notices
 * cryptodev: ``min`` and ``max`` fields of ``rte_crypto_param_range`` structure
   will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
 
-* cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
-  has a limited size ``uint16_t``.
-  It will be moved and extended as ``uint32_t`` in DPDK 21.11.
-
 * cryptodev: The structure ``rte_crypto_sym_vec`` would be updated to add
   ``dest_sgl`` to support out of place processing.
   This field will be null for inplace processing.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..4a9d1dedd8 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -190,6 +190,9 @@ ABI Changes
      Use fixed width quotes for ``function_names`` or ``struct_names``.
      Use the past tense.
 
+* cryptodev: The field ``dataunit_len`` of the ``struct rte_crypto_cipher_xform``
+  moved to the end of the structure and extended to ``uint32_t``.
+
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..78844cee18 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -182,7 +182,7 @@ struct l2fwd_crypto_params {
 	unsigned digest_length;
 	unsigned block_size;
 
-	uint16_t cipher_dataunit_len;
+	uint32_t cipher_dataunit_len;
 
 	struct l2fwd_iv cipher_iv;
 	struct l2fwd_iv auth_iv;
@@ -1269,9 +1269,9 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
 
 	else if (strcmp(lgopts[option_index].name, "cipher_dataunit_len") == 0) {
 		retval = parse_size(&val, optarg);
-		if (retval == 0 && val >= 0 && val <= UINT16_MAX) {
+		if (retval == 0 && val >= 0) {
 			options->cipher_xform.cipher.dataunit_len =
-								(uint16_t)val;
+								(uint32_t)val;
 			return 0;
 		} else
 			return -1;
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 58c0724743..1106ad6201 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -195,9 +195,6 @@ struct rte_crypto_cipher_xform {
 	enum rte_crypto_cipher_algorithm algo;
 	/**< Cipher algorithm */
 
-	RTE_STD_C11
-	union { /* temporary anonymous union for ABI compatibility */
-
 	struct {
 		const uint8_t *data;	/**< pointer to key data */
 		uint16_t length;	/**< key length in bytes */
@@ -233,27 +230,6 @@ struct rte_crypto_cipher_xform {
 	 *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
 	 *  - Both keys must have the same size.
 	 **/
-
-	RTE_STD_C11
-	struct { /* temporary anonymous struct for ABI compatibility */
-		const uint8_t *_key_data; /* reserved for key.data union */
-		uint16_t _key_length;     /* reserved for key.length union */
-		/* next field can fill the padding hole */
-
-	uint16_t dataunit_len;
-	/**< When RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS is enabled,
-	 * this is the data-unit length of the algorithm,
-	 * otherwise or when the value is 0, use the operation length.
-	 * The value should be in the range defined by the dataunit_set field
-	 * in the cipher capability.
-	 *
-	 * - For AES-XTS it is the size of data-unit, from IEEE Std 1619-2007.
-	 * For-each data-unit in the operation, the tweak (IV) value is
-	 * assigned consecutively starting from the operation assigned IV.
-	 */
-
-	}; }; /* temporary struct nested in union for ABI compatibility */
-
 	struct {
 		uint16_t offset;
 		/**< Starting point for Initialisation Vector or Counter,
@@ -297,6 +273,18 @@ struct rte_crypto_cipher_xform {
 		 * which can be in the range 7 to 13 inclusive.
 		 */
 	} iv;	/**< Initialisation vector parameters */
+
+	uint32_t dataunit_len;
+	/**< When RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS is enabled,
+	 * this is the data-unit length of the algorithm,
+	 * otherwise or when the value is 0, use the operation length.
+	 * The value should be in the range defined by the dataunit_set field
+	 * in the cipher capability.
+	 *
+	 * - For AES-XTS it is the size of data-unit, from IEEE Std 1619-2007.
+	 * For-each data-unit in the operation, the tweak (IV) value is
+	 * assigned consecutively starting from the operation assigned IV.
+	 */
 };
 
 /** Symmetric Authentication / Hash Algorithms
-- 
2.25.1


^ permalink raw reply	[relevance 12%]

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-01 17:40  0%         ` Ananyev, Konstantin
@ 2021-10-04  8:46  3%           ` Ferruh Yigit
  2021-10-04  9:20  0%             ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-04  8:46 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay

On 10/1/2021 6:40 PM, Ananyev, Konstantin wrote:
> 
> 
>> On 10/1/2021 3:02 PM, Konstantin Ananyev wrote:
>>> Rework 'fast' burst functions to use rte_eth_fp_ops[].
>>> While it is an API/ABI breakage, this change is intended to be
>>> transparent for both users (no changes in user app is required) and
>>> PMD developers (no changes in PMD is required).
>>> One extra thing to note - RX/TX callback invocation will cause extra
>>> function call with these changes. That might cause some insignificant
>>> slowdown for code-path where RX/TX callbacks are heavily involved.
>>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>
>> <...>
>>
>>>  static inline int
>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>  {
>>> -	struct rte_eth_dev *dev;
>>> +	struct rte_eth_fp_ops *p;
>>> +	void *qd;
>>> +
>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>> +			port_id, queue_id);
>>> +		return -EINVAL;
>>> +	}
>>
>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> 
> Original rte_eth_rx_queue_count() always has similar checks enabled,
> that's why I also kept them 'always on'. 
> 
>>
>> <...>
>>
>>> +++ b/lib/ethdev/version.map
>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>>  	rte_mtr_meter_policy_delete;
>>>  	rte_mtr_meter_policy_update;
>>>  	rte_mtr_meter_policy_validate;
>>> +
>>> +	# added in 21.05
>>
>> s/21.05/21.11/
>>
>>> +	__rte_eth_rx_epilog;
>>> +	__rte_eth_tx_prolog;
>>
>> These are directly called by application and must be part of ABI, but marked as
>> 'internal' and has '__rte' prefix to highlight it, this may be confusing.
>> What about making them proper, non-internal, API?
> 
> Hmm, not sure what you suggest here.
> We don't want users to call them explicitly.
> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> So I did what I thought is our usual policy for such semi-internal things:
> have '@internal' in comments, but in version.map put them under EXPERIMENTAL/global
> section.
> 
> What do you think it should be instead?
>  

Make them public API. (Basically just remove '__' prefix and @internal comment).

This way applications can use them to run custom callback(s) (not only the
registered ones); not sure if this can be dangerous though.

We need to trace the ABI for these functions; making them public clarifies it.

Also, the comment can be updated to describe the intended usage instead of marking
them internal; applications can use these anyway whether we mark them internal or not.


>>>  };
>>>
>>>  INTERNAL {
>>>  	global:
>>>
>>> +	rte_eth_fp_ops;
>>
>> This variable is accessed in inline function, so accessed by application, not
>> sure if it suits the 'internal' object definition, internal should be only for
>> objects accessed by other parts of DPDK.
>> I think this can be added to 'DPDK_22'.
>>
>>>  	rte_eth_dev_allocate;
>>>  	rte_eth_dev_allocated;
>>>  	rte_eth_dev_attach_secondary;
>>>
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04  8:46  3%           ` Ferruh Yigit
@ 2021-10-04  9:20  0%             ` Ananyev, Konstantin
  2021-10-04 10:13  3%               ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2021-10-04  9:20 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay


> >>
> >>>  static inline int
> >>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>>  {
> >>> -	struct rte_eth_dev *dev;
> >>> +	struct rte_eth_fp_ops *p;
> >>> +	void *qd;
> >>> +
> >>> +	if (port_id >= RTE_MAX_ETHPORTS ||
> >>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>> +		RTE_ETHDEV_LOG(ERR,
> >>> +			"Invalid port_id=%u or queue_id=%u\n",
> >>> +			port_id, queue_id);
> >>> +		return -EINVAL;
> >>> +	}
> >>
> >> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> >
> > Original rte_eth_rx_queue_count() always has similar checks enabled,
> > that's why I also kept them 'always on'.
> >
> >>
> >> <...>
> >>
> >>> +++ b/lib/ethdev/version.map
> >>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>>  	rte_mtr_meter_policy_delete;
> >>>  	rte_mtr_meter_policy_update;
> >>>  	rte_mtr_meter_policy_validate;
> >>> +
> >>> +	# added in 21.05
> >>
> >> s/21.05/21.11/
> >>
> >>> +	__rte_eth_rx_epilog;
> >>> +	__rte_eth_tx_prolog;
> >>
> >> These are directly called by application and must be part of ABI, but marked as
> >> 'internal' and has '__rte' prefix to highligh it, this may be confusing.
> >> What about making them proper, non-internal, API?
> >
> > Hmm, not sure what you suggest here.
> > We don't want users to call them explicitly.
> > They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> > So I did what I thought is our usual policy for such semi-internal things:
> > have '@internal' in comments, but in version.map put them under EXPERIMENTAL/global
> > section.
> >
> > What do you think it should be instead?
> >
> 
> Make them public API. (Basically just remove '__' prefix and @internal comment).
> 
> This way applications can use them to run custom callback(s) (not only the
> registered ones); not sure if this can be dangerous though.

Hmm, as I said above, I don't want users to call them explicitly.
Do you have any good reason to allow it?

> 
> We need to trace the ABI for these functions; making them public clarifies it.

We do have plenty of semi-internal functions right now,
why would adding this one be a problem?
On the other hand - if we declare it public, we will be obliged to support it
in future releases, plus it might encourage users to use it on their own.
To me that sounds like extra headache without any gain in return.

> Also comment can be updated to describe intended usage instead of marking them
> internal, and applications can use these anyway if we mark them internal or not.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/3] ethdev: update modify field flow action
  2021-10-01 19:52  9%   ` [dpdk-dev] [PATCH 1/3] " Viacheslav Ovsiienko
@ 2021-10-04  9:38  0%     ` Ori Kam
  0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2021-10-04  9:38 UTC (permalink / raw)
  To: Slava Ovsiienko, dev
  Cc: Raslan Darawsheh, Matan Azrad, Shahaf Shuler, Gregory Etelson,
	NBU-Contact-Thomas Monjalon

Hi Slava,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Friday, October 1, 2021 10:52 PM
> Subject: [PATCH 1/3] ethdev: update modify field flow action
> 
> The generic modify field flow action introduced in [1] has some issues related
> to the immediate source operand:
> 
>   - immediate source can be presented either as an unsigned
>     64-bit integer or pointer to data pattern in memory.
>     There was no explicit pointer field defined in the union
> 
>   - the byte ordering for 64-bit integer was not specified.
>     Many fields have lesser lengths and byte ordering
>     is crucial.
> 
>   - how the bit offset is applied to the immediate source
>     field was not defined and documented
> 
>   - 64-bit integer size is not enough to provide MAC and

I think for a MAC address it is enough.

>     IPv6 addresses
> 
> In order to cover the issues and exclude any ambiguities the following is
> done:
> 
>   - introduce the explicit pointer field
>     in rte_flow_action_modify_data structure
> 
>   - replace the 64-bit unsigned integer with 16-byte array
> 
>   - update the modify field flow action documentation
> 
> [1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/prog_guide/rte_flow.rst     |  8 ++++++++
>  doc/guides/rel_notes/release_21_11.rst |  7 +++++++
>  lib/ethdev/rte_flow.h                  | 15 ++++++++++++---
>  3 files changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> index 2b42d5ec8c..a54760a7b4 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2835,6 +2835,14 @@ a packet to any other part of it.
>  ``value`` sets an immediate value to be used as a source or points to a
> location of the value in memory. It is used instead of ``level`` and ``offset``
> for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER``
> respectively.
> +The data in memory should be presented exactly in the same byte order
> +and length as in the relevant flow item, i.e. data for field with type
> +RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field in
> +rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
> +rte_flow_item_ipv6 conventions, and so on. The bitfield extracted from
> +the memory being applied as second operation parameter is defined by
> +width and the destination field offset. If the field size is larger than
> +16 bytes the pattern can be provided as pointer only.
> 
You should specify where the offset of the src is taken from.
Per your example, if the application wants to change the second byte of the source MAC,
it should give as an immediate value 6 bytes, with the second byte as the new value to set.
So where does it take the offset from, since offset is not valid in the case of an immediate value?
I assume it is based on the offset of the destination.

>  .. _table_rte_flow_action_modify_field:
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 73e377a007..7db6cccab0 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -170,6 +170,10 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
> 
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated, immediate
> +data
> +  array is extended, data pointer field is explicitly added to union,
> +the
> +  action behavior is defined in more strict fashion and documentation
> uddated.
> +
Uddated -> updated?
I think it is important to document here that the behavior has changed,
from setting only the relevant value to update to setting the whole field, with
the masking done internally.

> 
>  ABI Changes
>  -----------
> @@ -206,6 +210,9 @@ ABI Changes
>    and hard expiry limits. Limits can be either in number of packets or bytes.
> 
> 
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated.
> +
> +
>  Known Issues
>  ------------
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> 7b1ed7f110..af4c693ead 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3204,6 +3204,9 @@ enum rte_flow_field_id {  };
> 
>  /**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *
>   * Field description for MODIFY_FIELD action.
>   */
>  struct rte_flow_action_modify_data {
> @@ -3217,10 +3220,16 @@ struct rte_flow_action_modify_data {
>  			uint32_t offset;
>  		};
>  		/**
> -		 * Immediate value for RTE_FLOW_FIELD_VALUE or
> -		 * memory address for RTE_FLOW_FIELD_POINTER.
> +		 * Immediate value for RTE_FLOW_FIELD_VALUE, presented
> in the
> +		 * same byte order and length as in relevant
> rte_flow_item_xxx.

Please see my comment about how to get the offset.

>  		 */
> -		uint64_t value;
> +		uint8_t value[16];
> +		/*
> +		 * Memory address for RTE_FLOW_FIELD_POINTER, memory
> layout
> +		 * should be the same as for relevant field in the
> +		 * rte_flow_item_xxx structure.

I assume that in this case, too, the offset comes from the dest, right?

> +		 */
> +		void *pvalue;
>  	};
>  };
> 
> --
> 2.18.1

Thanks,
Ori


^ permalink raw reply	[relevance 0%]
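
For readers following the thread above: the layout change under discussion can be sketched as below. The struct here is a simplified, local stand-in for the proposed rte_flow_action_modify_data (not the actual DPDK definition); it shows how an immediate MAC value would be stored in the 16-byte array in the same byte order and length as the corresponding flow item field.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the proposed rte_flow_action_modify_data:
 * the 64-bit 'value' becomes a 16-byte array, and an explicit pointer
 * member 'pvalue' is added to the union. */
struct modify_data {
	union {
		struct {
			uint32_t level;
			uint32_t offset;
		};
		uint8_t value[16]; /* immediate, in flow item byte order */
		void *pvalue;      /* pointer to data laid out as in the item */
	};
};

/* Store a destination MAC as an immediate value: same byte order and
 * length as the 'dst' field of rte_flow_item_eth (network order),
 * unused trailing bytes zeroed. */
static void set_imm_mac(struct modify_data *d, const uint8_t mac[6])
{
	memset(d->value, 0, sizeof(d->value));
	memcpy(d->value, mac, 6);
}
```

Note this only illustrates the byte layout; how the bit offset is applied to the immediate source is exactly the open question raised in the review above.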

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04  9:20  0%             ` Ananyev, Konstantin
@ 2021-10-04 10:13  3%               ` Ferruh Yigit
  2021-10-04 11:17  0%                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-10-04 10:13 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay

On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
> 
>>>>
>>>>>  static inline int
>>>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>>>>>  {
>>>>> -	struct rte_eth_dev *dev;
>>>>> +	struct rte_eth_fp_ops *p;
>>>>> +	void *qd;
>>>>> +
>>>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
>>>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>>> +		RTE_ETHDEV_LOG(ERR,
>>>>> +			"Invalid port_id=%u or queue_id=%u\n",
>>>>> +			port_id, queue_id);
>>>>> +		return -EINVAL;
>>>>> +	}
>>>>
>>>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
>>>
>>> The original rte_eth_rx_queue_count() has always had similar checks enabled,
>>> that's why I also kept them 'always on'.
>>>
>>>>
>>>> <...>
>>>>
>>>>> +++ b/lib/ethdev/version.map
>>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
>>>>>  	rte_mtr_meter_policy_delete;
>>>>>  	rte_mtr_meter_policy_update;
>>>>>  	rte_mtr_meter_policy_validate;
>>>>> +
>>>>> +	# added in 21.05
>>>>
>>>> s/21.05/21.11/
>>>>
>>>>> +	__rte_eth_rx_epilog;
>>>>> +	__rte_eth_tx_prolog;
>>>>
>>>> These are directly called by the application and must be part of the ABI, but are
>>>> marked as 'internal' and have the '__rte' prefix to highlight it, which may be confusing.
>>>> What about making them a proper, non-internal API?
>>>
>>> Hmm, not sure what you suggest here.
>>> We don't want users to call them explicitly.
>>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
>>> So I did what I thought is our usual policy for such semi-internal things:
>>> have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
>>> section.
>>>
>>> What do you think it should be instead?
>>>
>>
>> Make them public API. (Basically just remove '__' prefix and @internal comment).
>>
>> This way application can use them to run custom callback(s) (not only the
>> registered ones), not sure if this can be dangerous though.
> 
> Hmm, as I said above, I don't want users to call them explicitly.
> Do you have any good reason to allow it?
> 

Just to get rid of this state where internal APIs are exposed to the application.

>>
>> We need to trace the ABI for these functions, making them public clarifies it.
> 
> > We do have plenty of semi-internal functions right now,
> > why would adding this one be a problem?

As far as I remember, the existing ones are 'static inline' functions, and we don't
have an ABI concern with them. But these are actual functions called by applications.

> On the other hand - if we declare it public, we will be obliged to support it
> in future releases, plus it might encourage users to use it on their own.
> To me that sounds like extra headache without any gain in return.
> 

If having those two as public API doesn't make sense, I agree with you.

>> Also comment can be updated to describe intended usage instead of marking them
>> internal, and applications can use these anyway if we mark them internal or not.
> 


^ permalink raw reply	[relevance 3%]
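
As background for the thread above: the flat-array scheme being reviewed can be sketched roughly as follows. Names, sizes, and checks are simplified stand-ins, not the actual DPDK definitions; the point is that the always-on bounds checks guard a direct index into a fixed-size fast-path table before any opaque queue pointer is dereferenced.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS  32
#define MAX_QUEUES 16

/* Simplified stand-in for rte_eth_fp_ops: per-port fast-path state. */
struct fp_ops {
	uint32_t (*rx_queue_count)(void *rxq); /* per-queue callback */
	void *rxq_data[MAX_QUEUES];            /* opaque queue pointers */
};

static struct fp_ops fp_ops_table[MAX_PORTS]; /* zero-initialized */

/* Mirrors the shape of the patched rte_eth_rx_queue_count(): the
 * port/queue bounds checks stay always-on because the function
 * indexes fp_ops_table directly. */
static int rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
	struct fp_ops *p;
	void *qd;

	if (port_id >= MAX_PORTS || queue_id >= MAX_QUEUES)
		return -EINVAL;

	p = &fp_ops_table[port_id];
	qd = p->rxq_data[queue_id];
	if (qd == NULL || p->rx_queue_count == NULL)
		return -ENOTSUP;

	return (int)p->rx_queue_count(qd);
}
```

With nothing registered, the function returns -ENOTSUP rather than dereferencing a NULL queue pointer, which is why the checks cannot be compiled out behind RTE_ETHDEV_DEBUG_RX.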

* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
  2021-10-03 18:05  3%     ` Dmitry Kozlyuk
@ 2021-10-04 10:37  0%       ` Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-04 10:37 UTC (permalink / raw)
  To: Dmitry Kozlyuk; +Cc: dev, Ray Kinsella, David Marchand

Hi Dmitry,

Thanks for reviewing the series.
Please find my comments inline.

> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Sunday, October 3, 2021 11:35 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get
> set APIs
> 
> External Email
> 
> ----------------------------------------------------------------------
> 2021-09-03 18:10 (UTC+0530), Harman Kalra:
> > [...]
> > diff --git a/lib/eal/common/eal_common_interrupts.c
> > b/lib/eal/common/eal_common_interrupts.c
> > new file mode 100644
> > index 0000000000..2e4fed96f0
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_interrupts.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2021 Marvell.
> > + */
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +
> > +#include <rte_errno.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +
> > +#include <rte_interrupts.h>
> > +
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > +						       bool from_hugepage)
> 
> Since the purpose of the series is to reduce future ABI breakages, how about
> making the second parameter a "flags" word with some spare bits?
> (If not removing it completely per David's suggestion.)
> 

<HK> Having a second "flags" parameter is a good suggestion; I will include it.

> > +{
> > +	struct rte_intr_handle *intr_handle;
> > +	int i;
> > +
> > +	if (from_hugepage)
> > +		intr_handle = rte_zmalloc(NULL,
> > +					  size * sizeof(struct rte_intr_handle),
> > +					  0);
> > +	else
> > +		intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> > +	if (!intr_handle) {
> > +		RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > +		rte_errno = ENOMEM;
> > +		return NULL;
> > +	}
> > +
> > +	for (i = 0; i < size; i++) {
> > +		intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > +		intr_handle[i].alloc_from_hugepage = from_hugepage;
> > +	}
> > +
> > +	return intr_handle;
> > +}
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > +				struct rte_intr_handle *intr_handle, int
> index)
> 
> If rte_intr_handle_instance_alloc() returns a pointer to an array, this function
> is useless since the user can simply manipulate a pointer.

<HK> Users won't be able to manipulate the pointer, as they are not aware of the size of struct rte_intr_handle;
they will get a "dereferencing pointer to incomplete type" compilation error.

> If we want to make a distinction between a single struct rte_intr_handle and
> a commonly allocated bunch of such (but why?), then they should be
> represented by distinct types.

<HK> Do you mean we should have separate APIs for single allocation and batch allocation? The get API
will be useful only in the batch allocation case. Currently the interrupt autotests and the ifpga_rawdev
driver make batch allocations.
I think a common API for single and batch allocation is fine; the get API is required for returning a
particular intr_handle instance. But one problem I see in the current implementation is that there should
be an upper limit check on the index in the get/set APIs, which I will fix.

> 
> > +{
> > +	if (intr_handle == NULL) {
> > +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > +		rte_errno = ENOMEM;
> 
> Why is it sometimes ENOMEM and sometimes ENOTSUP when the handle is
> not allocated?

<HK> I will fix it and make the behavior consistent across the APIs.

> 
> > +		return NULL;
> > +	}
> > +
> > +	return &intr_handle[index];
> > +}
> > +
> > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> *intr_handle,
> > +				       const struct rte_intr_handle *src,
> > +				       int index)
> 
> See above regarding the "index" parameter. If it can be removed, a better
> name for this function would be rte_intr_handle_copy().

<HK> I think the get API is required.

> 
> > +{
> > +	if (intr_handle == NULL) {
> > +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > +		rte_errno = ENOTSUP;
> > +		goto fail;
> > +	}
> > +
> > +	if (src == NULL) {
> > +		RTE_LOG(ERR, EAL, "Source interrupt instance
> unallocated\n");
> > +		rte_errno = EINVAL;
> > +		goto fail;
> > +	}
> > +
> > +	if (index < 0) {
> > +		RTE_LOG(ERR, EAL, "Index cany be negative");
> > +		rte_errno = EINVAL;
> > +		goto fail;
> > +	}
> 
> How about making this parameter "size_t"?

<HK> You mean index? It can be size_t.

> 
> > +
> > +	intr_handle[index].fd = src->fd;
> > +	intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > +	intr_handle[index].type = src->type;
> > +	intr_handle[index].max_intr = src->max_intr;
> > +	intr_handle[index].nb_efd = src->nb_efd;
> > +	intr_handle[index].efd_counter_size = src->efd_counter_size;
> > +
> > +	memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> > +	memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> > +
> > +	return 0;
> > +fail:
> > +	return rte_errno;
> 
> Should be (-rte_errno) per documentation.
> Please check all functions in this file that return an "int" status.

<HK> Sure, I will fix it across the APIs.

> 
> > [...]
> > +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle
> > +*intr_handle) {
> > +	if (intr_handle == NULL) {
> > +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > +		rte_errno = ENOTSUP;
> > +		goto fail;
> > +	}
> > +
> > +	return intr_handle->vfio_dev_fd;
> > +fail:
> > +	return rte_errno;
> > +}
> 
> Returning an errno value instead of an FD is very error-prone.
> Probably returning (-1) is both safe and convenient?

<HK> Ack

> 
> > +
> > +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> > +				 int max_intr)
> > +{
> > +	if (intr_handle == NULL) {
> > +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > +		rte_errno = ENOTSUP;
> > +		goto fail;
> > +	}
> > +
> > +	if (max_intr > intr_handle->nb_intr) {
> > +		RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> > +PLT_MAX_RXTX_INTR_VEC_ID=%d",
> 
> Seems like this common/cnxk name leaked here by mistake?

<HK> Thanks for catching this.

> 
> > +			max_intr, intr_handle->nb_intr);
> > +		rte_errno = ERANGE;
> > +		goto fail;
> > +	}
> > +
> > +	intr_handle->max_intr = max_intr;
> > +
> > +	return 0;
> > +fail:
> > +	return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_get(const struct rte_intr_handle
> > +*intr_handle) {
> > +	if (intr_handle == NULL) {
> > +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > +		rte_errno = ENOTSUP;
> > +		goto fail;
> > +	}
> > +
> > +	return intr_handle->max_intr;
> > +fail:
> > +	return rte_errno;
> > +}
> 
> Should be negative per documentation and to avoid returning a positive
> value that cannot be distinguished from a successful return.
> Please also check other functions in this file returning an "int" result (not
> status).

<HK> Will fix it.

^ permalink raw reply	[relevance 0%]
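
To illustrate the "flags with spare bits" suggestion accepted in the thread above: replacing the bool with a flags word keeps the allocation prototype stable when new options are added later. This is a sketch only; the names below (intr_instance_alloc, INTR_INSTANCE_F_HUGEPAGE) are hypothetical, not the final API, and plain calloc() stands in for both the hugepage and normal allocation paths.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical flag bits: bit 0 replaces the 'from_hugepage' bool,
 * the remaining bits are spare for future allocation options. */
#define INTR_INSTANCE_F_HUGEPAGE (1u << 0)

struct intr_handle {
	int fd;
	uint32_t flags; /* records how the instance was allocated */
};

/* A 'flags' word instead of a bool: adding a new option later only
 * defines a new bit, with no prototype (and thus no ABI) change. */
static struct intr_handle *intr_instance_alloc(uint32_t flags)
{
	struct intr_handle *h;

	/* In DPDK the INTR_INSTANCE_F_HUGEPAGE bit would select
	 * rte_zmalloc(); calloc() stands in for both paths here. */
	h = calloc(1, sizeof(*h));
	if (h == NULL)
		return NULL;
	h->fd = -1; /* no valid FD yet */
	h->flags = flags;
	return h;
}
```

Storing the flags in the handle also lets the matching free routine pick the right deallocator, which is what the alloc_from_hugepage field in the patch is for.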

* Re: [dpdk-dev] [PATCH v3 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 10:13  3%               ` Ferruh Yigit
@ 2021-10-04 11:17  0%                 ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-04 11:17 UTC (permalink / raw)
  To: Yigit, Ferruh, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, mdr, Jayatheerthan, Jay



> 
> On 10/4/2021 10:20 AM, Ananyev, Konstantin wrote:
> >
> >>>>
> >>>>>  static inline int
> >>>>>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
> >>>>>  {
> >>>>> -	struct rte_eth_dev *dev;
> >>>>> +	struct rte_eth_fp_ops *p;
> >>>>> +	void *qd;
> >>>>> +
> >>>>> +	if (port_id >= RTE_MAX_ETHPORTS ||
> >>>>> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> >>>>> +		RTE_ETHDEV_LOG(ERR,
> >>>>> +			"Invalid port_id=%u or queue_id=%u\n",
> >>>>> +			port_id, queue_id);
> >>>>> +		return -EINVAL;
> >>>>> +	}
> >>>>
> >>>> Should the checks be wrapped with '#ifdef RTE_ETHDEV_DEBUG_RX' like the others?
> >>>
> >>> The original rte_eth_rx_queue_count() has always had similar checks enabled,
> >>> that's why I also kept them 'always on'.
> >>>
> >>>>
> >>>> <...>
> >>>>
> >>>>> +++ b/lib/ethdev/version.map
> >>>>> @@ -247,11 +247,16 @@ EXPERIMENTAL {
> >>>>>  	rte_mtr_meter_policy_delete;
> >>>>>  	rte_mtr_meter_policy_update;
> >>>>>  	rte_mtr_meter_policy_validate;
> >>>>> +
> >>>>> +	# added in 21.05
> >>>>
> >>>> s/21.05/21.11/
> >>>>
> >>>>> +	__rte_eth_rx_epilog;
> >>>>> +	__rte_eth_tx_prolog;
> >>>>
> >>>> These are directly called by the application and must be part of the ABI, but are
> >>>> marked as 'internal' and have the '__rte' prefix to highlight it, which may be confusing.
> >>>> What about making them a proper, non-internal API?
> >>>
> >>> Hmm, not sure what you suggest here.
> >>> We don't want users to call them explicitly.
> >>> They are sort of helpers for rte_eth_rx_burst/rte_eth_tx_burst.
> >>> So I did what I thought is our usual policy for such semi-internal things:
> >>> have '@internal' in comments, but in version.map put them under the EXPERIMENTAL/global
> >>> section.
> >>>
> >>> What do you think it should be instead?
> >>>
> >>
> >> Make them public API. (Basically just remove '__' prefix and @internal comment).
> >>
> >> This way application can use them to run custom callback(s) (not only the
> >> registered ones), not sure if this can be dangerous though.
> >
> > Hmm, as I said above, I don't want users to call them explicitly.
> > Do you have any good reason to allow it?
> >
> 
> Just to get rid of this state where internal APIs are exposed to the application.
> 
> >>
> >> We need to trace the ABI for these functions, making them public clarifies it.
> >
> > We do have plenty of semi-internal functions right now,
> > why would adding this one be a problem?
> 
> As far as I remember, the existing ones are 'static inline' functions, and we don't
> have an ABI concern with them. But these are actual functions called by applications.

Not always.
As an example of internal but not static ones:
rte_mempool_check_cookies
rte_mempool_contig_blocks_check_cookies
rte_mempool_op_calc_mem_size_helper
_rte_pktmbuf_read

> 
> > On the other hand - if we declare it public, we will be obliged to support it
> > in future releases, plus it might encourage users to use it on their own.
> > To me that sounds like extra headache without any gain in return.
> >
> 
> If having those two as public API doesn't make sense, I agree with you.
> 
> >> Also comment can be updated to describe intended usage instead of marking them
> >> internal, and applications can use these anyway if we mark them internal or not.
> >


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v4] net: introduce IPv4 ihl and version fields
  @ 2021-10-04 12:13  4% ` Gregory Etelson
  0 siblings, 0 replies; 200+ results
From: Gregory Etelson @ 2021-10-04 12:13 UTC (permalink / raw)
  To: dev, getelson; +Cc: matan, rasland, olivier.matz, thomas, Bernard Iremonger

The RTE IPv4 header definition combines the `version' and `ihl' fields
into a single structure member.
This patch introduces dedicated structure members for both the `version'
and `ihl' IPv4 fields. Separate header field definitions make it possible
to write simplified code to match on the IHL value in a flow rule.
The original `version_ihl' structure member is kept for backward
compatibility.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>

Depends-on: f7383e7c7ec1 ("net: announce changes in IPv4 header access")

Acked-by: Olivier Matz <olivier.matz@6wind.com>

---
v2: Add dependency.
v3: Add comments.
v4: Update release notes.
---
 app/test/test_flow_classify.c          |  8 ++++----
 doc/guides/rel_notes/release_21_11.rst |  3 +++
 lib/net/rte_ip.h                       | 16 +++++++++++++++-
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/app/test/test_flow_classify.c b/app/test/test_flow_classify.c
index 951606f248..4f64be5357 100644
--- a/app/test/test_flow_classify.c
+++ b/app/test/test_flow_classify.c
@@ -95,7 +95,7 @@ static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
  *  dst mask 255.255.255.00 / udp src is 32 dst is 33 / end"
  */
 static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = {
-	{ 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
+	{ { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_UDP, 0,
 	  RTE_IPV4(2, 2, 2, 3), RTE_IPV4(2, 2, 2, 7)}
 };
 static const struct rte_flow_item_ipv4 ipv4_mask_24 = {
@@ -131,7 +131,7 @@ static struct rte_flow_item  end_item = { RTE_FLOW_ITEM_TYPE_END,
  *  dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end"
  */
 static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = {
-	{ 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
+	{ { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_TCP, 0,
 	  RTE_IPV4(1, 2, 3, 4), RTE_IPV4(5, 6, 7, 8)}
 };
 
@@ -150,8 +150,8 @@ static struct rte_flow_item  tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP,
  *  dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end"
  */
 static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = {
-	{ 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, RTE_IPV4(11, 12, 13, 14),
-	RTE_IPV4(15, 16, 17, 18)}
+	{ { .version_ihl = 0}, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0,
+	RTE_IPV4(11, 12, 13, 14), RTE_IPV4(15, 16, 17, 18)}
 };
 
 static struct rte_flow_item_sctp sctp_spec_1 = {
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..deab44a92a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -170,6 +170,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* net: Add ``version`` and ``ihl`` bit-fields to ``struct rte_ipv4_hdr``.
+  Existing ``version_ihl`` field was kept for backward compatibility.
+
 
 ABI Changes
 -----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 05948b69b7..89a68d9433 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -38,7 +38,21 @@ extern "C" {
  * IPv4 Header
  */
 struct rte_ipv4_hdr {
-	uint8_t  version_ihl;		/**< version and header length */
+	__extension__
+	union {
+		uint8_t version_ihl;    /**< version and header length */
+		struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+			uint8_t ihl:4;     /**< header length */
+			uint8_t version:4; /**< version */
+#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+			uint8_t version:4; /**< version */
+			uint8_t ihl:4;     /**< header length */
+#else
+#error "setup endian definition"
+#endif
+		};
+	};
 	uint8_t  type_of_service;	/**< type of service */
 	rte_be16_t total_length;	/**< length of packet */
 	rte_be16_t packet_id;		/**< packet ID */
-- 
2.33.0


^ permalink raw reply	[relevance 4%]
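
A quick way to see why the patch above orders the bitfields by endianness: on common little-endian ABIs the first-declared bitfield occupies the low-order bits of the byte, so `ihl` must come first for `version_ihl = 0x45` to decode as version 4, IHL 5. The sketch below reproduces only the little-endian branch of the union as a standalone type (without the DPDK byte-order macros), under that little-endian-host assumption.

```c
#include <assert.h>
#include <stdint.h>

/* Little-endian layout from the patch: the first bitfield lands in the
 * low-order bits of the byte, so 'ihl' is declared before 'version'. */
union ipv4_ver_ihl {
	uint8_t version_ihl;
	struct {
		uint8_t ihl:4;     /* header length, in 32-bit words */
		uint8_t version:4; /* always 4 for IPv4 */
	};
};

/* Decode the version nibble of an IPv4 header's first byte. */
static unsigned int ipv4_version(uint8_t first_byte)
{
	union ipv4_ver_ihl u = { .version_ihl = first_byte };
	return u.version;
}

/* Decode the IHL nibble of an IPv4 header's first byte. */
static unsigned int ipv4_ihl(uint8_t first_byte)
{
	union ipv4_ver_ihl u = { .version_ihl = first_byte };
	return u.ihl;
}
```

The RTE_BYTE_ORDER conditional in the patch exists precisely because a big-endian host needs the two declarations swapped to keep `version_ihl` and the bitfields consistent.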

* [dpdk-dev] [PATCH v1] ci: update machine meson option to platform
@ 2021-10-04 12:55  4% Juraj Linkeš
  2021-10-04 13:29  4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
  0 siblings, 1 reply; 200+ results
From: Juraj Linkeš @ 2021-10-04 12:55 UTC (permalink / raw)
  To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš

The way we're building DPDK in CI, with -Dmachine=default, was not
updated when the option got replaced, in order to preserve a
backwards-compatible build call that facilitates ABI verification between
DPDK versions. Update the call to use -Dplatform=generic, which is the
most up-to-date way to execute the same build and is now present in all
DPDK versions the ABI check verifies.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .ci/linux-build.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..f8710e3ad4 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
     OPTS="$OPTS -Dexamples=all"
 fi
 
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -platform=generic"
 OPTS="$OPTS --default-library=$DEF_LIB"
 OPTS="$OPTS --buildtype=debugoptimized"
 OPTS="$OPTS -Dcheck_includes=true"
-- 
2.20.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2] ci: update machine meson option to platform
  2021-10-04 12:55  4% [dpdk-dev] [PATCH v1] ci: update machine meson option to platform Juraj Linkeš
@ 2021-10-04 13:29  4% ` Juraj Linkeš
  2021-10-11 13:40  4%   ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
  0 siblings, 1 reply; 200+ results
From: Juraj Linkeš @ 2021-10-04 13:29 UTC (permalink / raw)
  To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš

The way we're building DPDK in CI, with -Dmachine=default, was not
updated when the option got replaced, in order to preserve a
backwards-compatible build call that facilitates ABI verification between
DPDK versions. Update the call to use -Dplatform=generic, which is the
most up-to-date way to execute the same build and is now present in all
DPDK versions the ABI check verifies.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
 .ci/linux-build.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..06aaa79100 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
     OPTS="$OPTS -Dexamples=all"
 fi
 
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS --default-library=$DEF_LIB"
 OPTS="$OPTS --buildtype=debugoptimized"
 OPTS="$OPTS -Dcheck_includes=true"
-- 
2.20.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
@ 2021-10-04 13:55  6%       ` Konstantin Ananyev
  2021-10-04 13:55  2%       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
                         ` (4 subsequent siblings)
  5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of 'fast' ethdev ops take pointers to internal
queue data structures as an input parameter,
while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
index.
For future work to hide rte_eth_devices[] and friends, it would be
desirable to unify the parameter list of all 'fast' ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI change,
as eth_rx_queue_count_t is used by the public inline ethdev function
rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 37dc1a7786..fd80538b6c 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -213,6 +213,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` were changed.
+  Instead of a pointer to ``rte_eth_dev`` and a queue index, it now accepts
+  a pointer to internal queue data as input. While this change is transparent
+  to the user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+  is used by the public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 097dd10de9..e07242e961 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3130,20 +3130,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..b295af2a57 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1011,10 +1011,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1031,12 +1030,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6..460e130a83 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
@@ -476,8 +475,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..40de36cb20 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..3210a0e008 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc..648d12a1b4 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..eab798e52c 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..5493ae6bba 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2117,14 +2117,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e8..a08b80f020 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..61936b0ab1 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1427,14 +1427,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index b10db0874d..b45abec91a 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -222,7 +222,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..437992ecdf 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index f2b2d75bbc..b0c4b3ebd9 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca24..c5027be1dc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..1f802851e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30b..6696db6f6f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d..e6f8e5bfc1 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..4b5713f3ec 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..9642b7c00f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..948c0b71c1 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply	[relevance 6%]

* [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-01 14:02  4%   ` [dpdk-dev] [PATCH v3 0/7] hide eth dev related structures Konstantin Ananyev
                       ` (4 preceding siblings ...)
  2021-10-01 17:02  0%     ` [dpdk-dev] [PATCH v3 0/7] " Ferruh Yigit
@ 2021-10-04 13:55  4%     ` Konstantin Ananyev
  2021-10-04 13:55  6%       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                         ` (5 more replies)
  5 siblings, 6 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v4 changes:
 - Fix secondary process attach (Pavan)
 - Fix build failure (Ferruh)
 - Update lib/ethdev/version.map (Ferruh)
   Note that moving newly added symbols from the EXPERIMENTAL to the DPDK_22
   section makes checkpatch.sh complain.

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do a similar thing here, so I decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow possible future changes to core ethdev-related structures
to be transparent to the user and help improve ABI/API stability.
Note that the current ethdev API is preserved, but this is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of
   this flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers
   Note that exposing this extra information allows us to avoid extra
   changes inside PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user apps are required) and PMD developers
   (no changes in PMDs are required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause some slowdown for
   code paths where RX/TX callbacks are heavily involved.
   Hope such a trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.
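
For illustration, the flat-array dispatch described in points 1-2 can be
sketched as a minimal standalone model. This is not the actual DPDK code;
names such as `fp_ops`, `fops`, `dummy_rx_burst` and `rx_burst` are invented
for the example, which only shows the shape of the technique (public array of
per-port function pointers plus opaque queue-data pointers, with an inline
wrapper doing one indexed load and an indirect call):

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_PORTS  4
#define MAX_QUEUES 2

/* Per-port 'fast' ops: burst function pointer plus an array of opaque
 * per-queue data pointers, mirroring the split described above. */
struct fp_ops {
	uint16_t (*rx_pkt_burst)(void *rxq, void **pkts, uint16_t n);
	void **rxq_data; /* array of per-queue private data pointers */
};

/* Public flat array, indexed by port id. */
static struct fp_ops fops[MAX_PORTS];

/* A dummy PMD-level burst function: 'queue data' is just a counter of
 * available packets, decremented on each call. */
static uint16_t
dummy_rx_burst(void *rxq, void **pkts, uint16_t n)
{
	uint16_t *avail = rxq;
	uint16_t nb = (n < *avail) ? n : *avail;

	(void)pkts; /* no real mbufs in this sketch */
	*avail -= nb;
	return nb;
}

/* Inline wrapper, the analogue of rte_eth_rx_burst(): fetch the per-port
 * entry, then make one indirect call with the opaque queue pointer. */
static inline uint16_t
rx_burst(uint16_t port, uint16_t queue, void **pkts, uint16_t n)
{
	struct fp_ops *p = &fops[port];

	return p->rx_pkt_burst(p->rxq_data[queue], pkts, n);
}
```

The key property is that the application-visible part is only the flat array
and the inline wrapper; the layout of whatever `rxq_data[]` points to stays
private to the driver.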
 
Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

Would like to thank everyone who already reviewed and tested previous
versions of this series. All other interested parties, please don't be shy
and provide your feedback.

Konstantin Ananyev (7):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy ethdev 'fast' API into separate structure
  ethdev: make burst functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: remove legacy Rx descriptor done API
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_ethdev_vf.c             |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 149 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  89 ++++--
 lib/ethdev/rte_ethdev.h                       | 286 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |  10 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 68 files changed, 673 insertions(+), 570 deletions(-)

-- 
2.26.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  2021-10-04 13:55  6%       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
  2021-10-04 13:55  2%       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-04 13:56  2%       ` Konstantin Ananyev
  2021-10-05  9:54  0%         ` David Marchand
  2021-10-04 13:56  8%       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                         ` (2 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework 'fast' burst functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).
One extra thing to note - RX/TX callback invocation will cause an extra
function call with these changes. That might cause some insignificant
slowdown for code paths where RX/TX callbacks are heavily involved.
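
The out-of-line callback walk that replaces the old inline loop can be
modelled with a small standalone sketch. Names here (`cb_node`,
`run_rx_callbacks`, `cap_cb`) are invented for illustration; the real helper
in this patch is `__rte_eth_rx_epilog()`:

```c
#include <stdint.h>
#include <stddef.h>

/* Singly-linked list of post-RX callbacks, one node per registered cb.
 * Each callback may adjust the number of received packets. */
struct cb_node {
	struct cb_node *next;
	uint16_t (*fn)(uint16_t nb_rx, void *param);
	void *param;
};

/* Analogue of __rte_eth_rx_epilog(): walk the chain once per burst.
 * The inline burst path pays one extra call - to this helper - and
 * only when at least one callback is installed. */
static uint16_t
run_rx_callbacks(const struct cb_node *cb, uint16_t nb_rx)
{
	while (cb != NULL) {
		nb_rx = cb->fn(nb_rx, cb->param);
		cb = cb->next;
	}
	return nb_rx;
}

/* Example callback: clamp the burst to a per-queue cap. */
static uint16_t
cap_cb(uint16_t nb_rx, void *param)
{
	uint16_t cap = *(uint16_t *)param;

	return (nb_rx < cap) ? nb_rx : cap;
}
```

Because the common no-callback case is a single NULL check in the inline
path, the extra function call is only paid where callbacks are actually in
use.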

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   5 +
 3 files changed, 210 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..27d29b2ac6 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9642b7c00f..7f68be406e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t __rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = __rte_eth_rx_epilog(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**
@@ -5133,21 +5179,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5194,23 +5249,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entry to PMD's rte_eth_tx_bulk implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+uint16_t __rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5281,20 +5367,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5302,21 +5402,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = __rte_eth_tx_prolog(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5379,31 +5474,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..2348ec3c3c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -1,6 +1,10 @@
 DPDK_22 {
 	global:
 
+	# internal functions called by public inline ones
+	__rte_eth_rx_epilog;
+	__rte_eth_tx_prolog;
+
 	rte_eth_add_first_rx_callback;
 	rte_eth_add_rx_callback;
 	rte_eth_add_tx_callback;
@@ -76,6 +80,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (2 preceding siblings ...)
  2021-10-04 13:56  2%       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-04 13:56  8%       ` Konstantin Ananyev
  2021-10-05 10:04  0%         ` David Marchand
  2021-10-06 16:42  0%       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:56 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into private header (ethdev_driver.h).
A few minor changes are included to keep DPDK building after that.
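
The pattern applied by this patch - the full structure layout moves to a
driver-only header while applications see just a forward declaration and
accessor functions - can be sketched as below. This is a generic standalone
model, not the DPDK code; `eth_dev_alloc`, `eth_dev_name` and `eth_dev_free`
are invented names standing in for the public API surface:

```c
#include <stdlib.h>
#include <string.h>

/* --- what a public header (the "rte_ethdev.h" role) would expose:
 *     an opaque handle plus accessor prototypes, no layout --- */
struct eth_dev;                              /* forward declaration only */
struct eth_dev *eth_dev_alloc(const char *name);
const char *eth_dev_name(const struct eth_dev *dev);
void eth_dev_free(struct eth_dev *dev);

/* --- what a private header (the "ethdev_driver.h" role) would hold:
 *     the full layout, visible to drivers only. Fields can now change
 *     without affecting application binaries. --- */
struct eth_dev {
	char name[32];
	void *dev_private;
};

struct eth_dev *
eth_dev_alloc(const char *name)
{
	struct eth_dev *dev = calloc(1, sizeof(*dev));

	if (dev != NULL)
		strncpy(dev->name, name, sizeof(dev->name) - 1);
	return dev;
}

const char *
eth_dev_name(const struct eth_dev *dev)
{
	return dev->name;
}

void
eth_dev_free(struct eth_dev *dev)
{
	free(dev);
}
```

Applications compiled against only the forward declaration cannot take
`sizeof(struct eth_dev)` or touch its fields, which is exactly why the move
is an ABI break for code that previously accessed `rte_eth_devices[]`
directly, yet transparent for code that used only the API.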

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/ethdev/version.map                        |   2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 13 files changed, 165 insertions(+), 152 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 6055551443..2944149943 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -228,6 +228,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessed directly
+  by the user any more. While it is an ABI breakage, this change is intended
+  to be transparent for both users (no changes in user apps are required) and
+  PMD developers (no changes in PMDs are required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..63b04dce32 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,155 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * Next two fields are per-device data but *data is shared between
+	 * primary and secondary processes and *process_private is per-process
+	 * private. The second one is managed by PMDs if necessary.
+	 */
+	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 63078e1ef4..2d07db0811 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -95,147 +95,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0881202381..3dc494a016 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -75,7 +75,6 @@ DPDK_22 {
 	rte_eth_dev_udp_tunnel_port_add;
 	rte_eth_dev_udp_tunnel_port_delete;
 	rte_eth_dev_vlan_filter;
-	rte_eth_devices;
 	rte_eth_find_next;
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
@@ -272,6 +271,7 @@ INTERNAL {
 	rte_eth_dev_release_port;
 	rte_eth_dev_internal_reset;
 	rte_eth_devargs_parse;
+	rte_eth_devices;
 	rte_eth_dma_zone_free;
 	rte_eth_dma_zone_reserve;
 	rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>
-- 
2.26.3


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
  2021-10-04 13:55  6%       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-04 13:55  2%       ` Konstantin Ananyev
  2021-10-05 13:09  0%         ` Thomas Monjalon
  2021-10-04 13:56  2%       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
                         ` (3 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-04 13:55 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future changes to the core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the data from rte_eth_dev public,
so we can still use inline functions for 'fast' calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 27 +++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h | 45 +++++++++++++++++++++++++++++++
 4 files changed, 131 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..40333e7651 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'fast' API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth 'fast' API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 424bc260fa..036c82cbfb 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast' API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_callback_process(eth_dev,
 				RTE_ETH_EVENT_DESTROY, NULL);
 
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
 	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
 
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1788,6 +1793,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD rx/tx function */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1810,6 +1818,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point rx/tx functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4568,6 +4579,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
@@ -4736,6 +4755,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return;
 
+	/*
+	 * For a secondary process, at this point we expect the device
+	 * to be already 'usable', so shared data and all function pointers
+	 * for 'fast' devops have to be set up properly inside rte_eth_dev.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
 
 	dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 948c0b71c1..fe47a660c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queue data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for 'fast' ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * 'fast' ethdev functions and related data are held in a flat array,
+ * one entry per ethdev.
+ */
+struct rte_eth_fp_ops {
+
+	/** first 64B line */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+
+	/** second 64B line */
+	struct rte_ethdev_qdata rxq;
+	struct rte_ethdev_qdata txq;
+	uintptr_t reserved2[4];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH 3/3] cryptodev: rework session framework
  2021-10-01 15:53  0%   ` Zhang, Roy Fan
@ 2021-10-04 19:07  0%     ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-10-04 19:07 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	De Lara Guarch, Pablo, Trahe, Fiona, Doherty, Declan, matan,
	g.singh, jianjay.zhou, asomalap, ruifeng.wang, Ananyev,
	Konstantin, Nicolau, Radu, ajit.khaparde, Nagadheeraj Rottela,
	Ankur Dwivedi, Power, Ciara

Hi Fan,

> Hi Akhil,
> 
> Your patch failed all QAT tests - it may also fail on all PMDs that need
> to know the session's physical address (such as nitrox, caam).
> 
> The reason is that the QAT PMD cannot know the physical address of the
> session private data passed into it, as the private data pointer is
> computed by rte_cryptodev_sym_session_init() as an offset from the start
> of the session (which has a mempool obj header with a valid physical
> address) for that driver type. The session private data pointer - although
> a valid buffer - does not have the obj header that contains the physical
> address.
> 
> I think the solution can be: instead of passing the session private data,
> the rte_cryptodev_sym_session pointer should be passed, and a macro should
> be provided for the drivers to compute the offset themselves and then
> locate the correct session private data within the session.

Thanks for trying this patchset.
Instead of passing the rte_cryptodev_sym_session, can we add another
argument to sym_session_configure() to pass the physical address of the session?
This will spare the PMD the overhead of getting the offset from the library and then calling rte_mempool_virt2iova().
sym_session_configure(dev, xforms, sess_priv, sess_priv_iova)

The motive for the above change: since the mempool is not exposed to the PMD,
it does not make sense to call rte_mempool_virt2iova() inside the PMD.
What do you suggest?
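
A minimal sketch of the proposed signature (hypothetical names and stub
types only, not the actual cryptodev API): the library hands the PMD both
the private-data pointer and its IO address, so the PMD never needs the
mempool to translate virtual to physical itself.

```c
#include <assert.h>
#include <stdint.h>

/* illustrative stand-in for rte_iova_t */
typedef uint64_t rte_iova_t;

/* stub transform and PMD session-private layout, for the sketch only */
struct crypto_xform { int algo; };

struct pmd_session_priv {
	int algo;
	rte_iova_t self_iova; /* PMDs like QAT keep this for HW descriptors */
};

/* proposed: sym_session_configure(dev, xforms, sess_priv, sess_priv_iova) */
static int
sym_session_configure(int dev_id, const struct crypto_xform *xform,
		      void *sess_priv, rte_iova_t sess_priv_iova)
{
	struct pmd_session_priv *priv = sess_priv;

	(void)dev_id;
	priv->algo = xform->algo;
	/* previously the PMD would have had to derive this address itself
	 * (e.g. via rte_mempool_virt2iova()); now the library passes it in */
	priv->self_iova = sess_priv_iova;
	return 0;
}
```

The design point being made in the thread is only about who performs the
virt-to-IOVA translation; the session layout itself is unchanged.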

Regards,
Akhil

> 
> Regards,
> Fan
> 
> > -----Original Message-----
> > From: Akhil Goyal <gakhil@marvell.com>
> > Sent: Thursday, September 30, 2021 3:50 PM
> > To: dev@dpdk.org
> > Cc: thomas@monjalon.net; david.marchand@redhat.com;
> > hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> > Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> > g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> > jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> > <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> > rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> > <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> > Subject: [PATCH 3/3] cryptodev: rework session framework
> >
> > As per current design, rte_cryptodev_sym_session_create() and
> > rte_cryptodev_sym_session_init() use separate mempool objects
> > for a single session.
> > And structure rte_cryptodev_sym_session is not directly used
> > by the application, it may cause ABI breakage if the structure
> > is modified in future.
> >
> > To address these two issues, the rte_cryptodev_sym_session_create
> > will take one mempool object for both the session and session
> > private data. The API rte_cryptodev_sym_session_init will now not
> > take mempool object.
> > rte_cryptodev_sym_session_create will now return an opaque session
> > pointer which will be used by the app in rte_cryptodev_sym_session_init
> > and other APIs.
> >
> > With this change, rte_cryptodev_sym_session_init will send
> > pointer to session private data of corresponding driver to the PMD
> > based on the driver_id for filling the PMD data.
> >
> > In data path, opaque session pointer is attached to rte_crypto_op
> > and the PMD can call an internal library API to get the session
> > private data pointer based on the driver id.
> >
> > TODO:
> > - inline APIs for opaque data
> > - move rte_cryptodev_sym_session struct to cryptodev_pmd.h
> > - currently nb_drivers are getting updated in RTE_INIT which
> > result in increasing the memory requirements for session.
> > This will be moved to PMD probe so that memory is created
> > only for those PMDs which are probed and not just compiled in.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> >  app/test-crypto-perf/cperf.h                  |   1 -
> >  app/test-crypto-perf/cperf_ops.c              |  33 ++---
> >  app/test-crypto-perf/cperf_ops.h              |   6 +-
> >  app/test-crypto-perf/cperf_test_latency.c     |   5 +-
> >  app/test-crypto-perf/cperf_test_latency.h     |   1 -
> >  .../cperf_test_pmd_cyclecount.c               |   5 +-
> >  .../cperf_test_pmd_cyclecount.h               |   1 -
> >  app/test-crypto-perf/cperf_test_throughput.c  |   5 +-
> >  app/test-crypto-perf/cperf_test_throughput.h  |   1 -
> >  app/test-crypto-perf/cperf_test_verify.c      |   5 +-
> >  app/test-crypto-perf/cperf_test_verify.h      |   1 -
> >  app/test-crypto-perf/main.c                   |  29 +---
> >  app/test/test_cryptodev.c                     | 130 +++++-------------
> >  app/test/test_cryptodev.h                     |   1 -
> >  app/test/test_cryptodev_asym.c                |   1 -
> >  app/test/test_cryptodev_blockcipher.c         |   6 +-
> >  app/test/test_event_crypto_adapter.c          |  28 +---
> >  app/test/test_ipsec.c                         |  22 +--
> >  drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c  |  33 +----
> >  .../crypto/aesni_mb/rte_aesni_mb_pmd_ops.c    |  34 +----
> >  drivers/crypto/armv8/rte_armv8_pmd_ops.c      |  34 +----
> >  drivers/crypto/bcmfs/bcmfs_sym_session.c      |  36 +----
> >  drivers/crypto/bcmfs/bcmfs_sym_session.h      |   6 +-
> >  drivers/crypto/caam_jr/caam_jr.c              |  32 ++---
> >  drivers/crypto/ccp/ccp_pmd_ops.c              |  32 +----
> >  drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  18 ++-
> >  drivers/crypto/cnxk/cn9k_cryptodev_ops.c      |  18 +--
> >  drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |  61 +++-----
> >  drivers/crypto/cnxk/cnxk_cryptodev_ops.h      |  13 +-
> >  drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  29 +---
> >  drivers/crypto/dpaa_sec/dpaa_sec.c            |  31 +----
> >  drivers/crypto/kasumi/rte_kasumi_pmd_ops.c    |  34 +----
> >  drivers/crypto/mlx5/mlx5_crypto.c             |  24 +---
> >  drivers/crypto/mvsam/rte_mrvl_pmd_ops.c       |  36 ++---
> >  drivers/crypto/nitrox/nitrox_sym.c            |  31 +----
> >  drivers/crypto/null/null_crypto_pmd_ops.c     |  34 +----
> >  .../crypto/octeontx/otx_cryptodev_hw_access.h |   1 -
> >  drivers/crypto/octeontx/otx_cryptodev_ops.c   |  60 +++-----
> >  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |  52 +++----
> >  .../octeontx2/otx2_cryptodev_ops_helper.h     |  16 +--
> >  drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  35 +----
> >  drivers/crypto/qat/qat_sym_session.c          |  29 +---
> >  drivers/crypto/qat/qat_sym_session.h          |   6 +-
> >  drivers/crypto/scheduler/scheduler_pmd_ops.c  |   9 +-
> >  drivers/crypto/snow3g/rte_snow3g_pmd_ops.c    |  34 +----
> >  drivers/crypto/virtio/virtio_cryptodev.c      |  31 ++---
> >  drivers/crypto/zuc/rte_zuc_pmd_ops.c          |  35 +----
> >  .../octeontx2/otx2_evdev_crypto_adptr_rx.h    |   3 +-
> >  examples/fips_validation/fips_dev_self_test.c |  32 ++---
> >  examples/fips_validation/main.c               |  20 +--
> >  examples/ipsec-secgw/ipsec-secgw.c            |  72 +++++-----
> >  examples/ipsec-secgw/ipsec.c                  |   3 +-
> >  examples/ipsec-secgw/ipsec.h                  |   1 -
> >  examples/ipsec-secgw/ipsec_worker.c           |   4 -
> >  examples/l2fwd-crypto/main.c                  |  41 +-----
> >  examples/vhost_crypto/main.c                  |  16 +--
> >  lib/cryptodev/cryptodev_pmd.h                 |   7 +-
> >  lib/cryptodev/rte_crypto.h                    |   2 +-
> >  lib/cryptodev/rte_crypto_sym.h                |   2 +-
> >  lib/cryptodev/rte_cryptodev.c                 |  73 ++++++----
> >  lib/cryptodev/rte_cryptodev.h                 |  23 ++--
> >  lib/cryptodev/rte_cryptodev_trace.h           |   5 +-
> >  lib/pipeline/rte_table_action.c               |   8 +-
> >  lib/pipeline/rte_table_action.h               |   2 +-
> >  lib/vhost/rte_vhost_crypto.h                  |   3 -
> >  lib/vhost/vhost_crypto.c                      |   7 +-
> >  66 files changed, 399 insertions(+), 1050 deletions(-)
> >
> > diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
> > index 2b0aad095c..db58228dce 100644
> > --- a/app/test-crypto-perf/cperf.h
> > +++ b/app/test-crypto-perf/cperf.h
> > @@ -15,7 +15,6 @@ struct cperf_op_fns;
> >
> >  typedef void  *(*cperf_constructor_t)(
> >  		struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id,
> >  		uint16_t qp_id,
> >  		const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-
> > perf/cperf_ops.c
> > index 1b3cbe77b9..f094bc656d 100644
> > --- a/app/test-crypto-perf/cperf_ops.c
> > +++ b/app/test-crypto-perf/cperf_ops.c
> > @@ -12,7 +12,7 @@ static int
> >  cperf_set_ops_asym(struct rte_crypto_op **ops,
> >  		   uint32_t src_buf_offset __rte_unused,
> >  		   uint32_t dst_buf_offset __rte_unused, uint16_t nb_ops,
> > -		   struct rte_cryptodev_sym_session *sess,
> > +		   void *sess,
> >  		   const struct cperf_options *options __rte_unused,
> >  		   const struct cperf_test_vector *test_vector __rte_unused,
> >  		   uint16_t iv_offset __rte_unused,
> > @@ -40,7 +40,7 @@ static int
> >  cperf_set_ops_security(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset __rte_unused,
> >  		uint32_t dst_buf_offset __rte_unused,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options __rte_unused,
> >  		const struct cperf_test_vector *test_vector __rte_unused,
> >  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -106,7 +106,7 @@ cperf_set_ops_security(struct rte_crypto_op **ops,
> >  static int
> >  cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector __rte_unused,
> >  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -145,7 +145,7 @@ cperf_set_ops_null_cipher(struct rte_crypto_op
> > **ops,
> >  static int
> >  cperf_set_ops_null_auth(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector __rte_unused,
> >  		uint16_t iv_offset __rte_unused, uint32_t *imix_idx)
> > @@ -184,7 +184,7 @@ cperf_set_ops_null_auth(struct rte_crypto_op
> **ops,
> >  static int
> >  cperf_set_ops_cipher(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -240,7 +240,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
> >  static int
> >  cperf_set_ops_auth(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -340,7 +340,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
> >  static int
> >  cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -455,7 +455,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op
> > **ops,
> >  static int
> >  cperf_set_ops_aead(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset, uint32_t *imix_idx)
> > @@ -563,9 +563,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
> >  	return 0;
> >  }
> >
> > -static struct rte_cryptodev_sym_session *
> > +static void *
> >  cperf_create_session(struct rte_mempool *sess_mp,
> > -	struct rte_mempool *priv_mp,
> >  	uint8_t dev_id,
> >  	const struct cperf_options *options,
> >  	const struct cperf_test_vector *test_vector,
> > @@ -590,7 +589,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> >  		if (sess == NULL)
> >  			return NULL;
> >  		rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
> > -						     &xform, priv_mp);
> > +						     &xform, sess_mp);
> >  		if (rc < 0) {
> >  			if (sess != NULL) {
> >  				rte_cryptodev_asym_session_clear(dev_id,
> > @@ -742,8 +741,7 @@ cperf_create_session(struct rte_mempool
> *sess_mp,
> >  			cipher_xform.cipher.iv.length = 0;
> >  		}
> >  		/* create crypto session */
> > -		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform,
> > -				priv_mp);
> > +		rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform);
> >  	/*
> >  	 *  auth only
> >  	 */
> > @@ -770,8 +768,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
> >  			auth_xform.auth.iv.length = 0;
> >  		}
> >  		/* create crypto session */
> > -		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
> > -				priv_mp);
> > +		rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform);
> >  	/*
> >  	 * cipher and auth
> >  	 */
> > @@ -830,12 +827,12 @@ cperf_create_session(struct rte_mempool *sess_mp,
> >  			cipher_xform.next = &auth_xform;
> >  			/* create crypto session */
> >  			rte_cryptodev_sym_session_init(dev_id,
> > -					sess, &cipher_xform, priv_mp);
> > +					sess, &cipher_xform);
> >  		} else { /* auth then cipher */
> >  			auth_xform.next = &cipher_xform;
> >  			/* create crypto session */
> >  			rte_cryptodev_sym_session_init(dev_id,
> > -					sess, &auth_xform, priv_mp);
> > +					sess, &auth_xform);
> >  		}
> >  	} else { /* options->op_type == CPERF_AEAD */
> >  		aead_xform.type = RTE_CRYPTO_SYM_XFORM_AEAD;
> > @@ -856,7 +853,7 @@ cperf_create_session(struct rte_mempool *sess_mp,
> >
> >  		/* Create crypto session */
> >  		rte_cryptodev_sym_session_init(dev_id,
> > -					sess, &aead_xform, priv_mp);
> > +					sess, &aead_xform);
> >  	}
> >
> >  	return sess;
> > diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
> > index ff125d12cd..3ff10491a0 100644
> > --- a/app/test-crypto-perf/cperf_ops.h
> > +++ b/app/test-crypto-perf/cperf_ops.h
> > @@ -12,15 +12,15 @@
> >  #include "cperf_test_vectors.h"
> >
> >
> > -typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
> > -		struct rte_mempool *sess_mp, struct rte_mempool *sess_priv_mp,
> > +typedef void *(*cperf_sessions_create_t)(
> > +		struct rte_mempool *sess_mp,
> >  		uint8_t dev_id, const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset);
> >
> >  typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
> >  		uint32_t src_buf_offset, uint32_t dst_buf_offset,
> > -		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
> > +		uint16_t nb_ops, void *sess,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> >  		uint16_t iv_offset, uint32_t *imix_idx);
> > diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
> > index 159fe8492b..4193f7e777 100644
> > --- a/app/test-crypto-perf/cperf_test_latency.c
> > +++ b/app/test-crypto-perf/cperf_test_latency.c
> > @@ -24,7 +24,7 @@ struct cperf_latency_ctx {
> >
> >  	struct rte_mempool *pool;
> >
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >
> >  	cperf_populate_ops_t populate_ops;
> >
> > @@ -59,7 +59,6 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx)
> >
> >  void *
> >  cperf_latency_test_constructor(struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id, uint16_t qp_id,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> > @@ -84,7 +83,7 @@ cperf_latency_test_constructor(struct rte_mempool *sess_mp,
> >  		sizeof(struct rte_crypto_sym_op) +
> >  		sizeof(struct cperf_op_result *);
> >
> > -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> > +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> >  			test_vector, iv_offset);
> >  	if (ctx->sess == NULL)
> >  		goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-perf/cperf_test_latency.h
> > index ed5b0a07bb..d3fc3218d7 100644
> > --- a/app/test-crypto-perf/cperf_test_latency.h
> > +++ b/app/test-crypto-perf/cperf_test_latency.h
> > @@ -17,7 +17,6 @@
> >  void *
> >  cperf_latency_test_constructor(
> >  		struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id,
> >  		uint16_t qp_id,
> >  		const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> > index cbbbedd9ba..3dd489376f 100644
> > --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> > +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
> > @@ -27,7 +27,7 @@ struct cperf_pmd_cyclecount_ctx {
> >  	struct rte_crypto_op **ops;
> >  	struct rte_crypto_op **ops_processed;
> >
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >
> >  	cperf_populate_ops_t populate_ops;
> >
> > @@ -93,7 +93,6 @@ cperf_pmd_cyclecount_test_free(struct cperf_pmd_cyclecount_ctx *ctx)
> >
> >  void *
> >  cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id, uint16_t qp_id,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> > @@ -120,7 +119,7 @@ cperf_pmd_cyclecount_test_constructor(struct rte_mempool *sess_mp,
> >  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> >  			sizeof(struct rte_crypto_sym_op);
> >
> > -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> > +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> >  			test_vector, iv_offset);
> >  	if (ctx->sess == NULL)
> >  		goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> > index 3084038a18..beb4419910 100644
> > --- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> > +++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.h
> > @@ -18,7 +18,6 @@
> >  void *
> >  cperf_pmd_cyclecount_test_constructor(
> >  		struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id,
> >  		uint16_t qp_id,
> >  		const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
> > index 76fcda47ff..dc5c48b4da 100644
> > --- a/app/test-crypto-perf/cperf_test_throughput.c
> > +++ b/app/test-crypto-perf/cperf_test_throughput.c
> > @@ -18,7 +18,7 @@ struct cperf_throughput_ctx {
> >
> >  	struct rte_mempool *pool;
> >
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >
> >  	cperf_populate_ops_t populate_ops;
> >
> > @@ -64,7 +64,6 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
> >
> >  void *
> >  cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id, uint16_t qp_id,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> > @@ -87,7 +86,7 @@ cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
> >  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> >  		sizeof(struct rte_crypto_sym_op);
> >
> > -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> > +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> >  			test_vector, iv_offset);
> >  	if (ctx->sess == NULL)
> >  		goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-crypto-perf/cperf_test_throughput.h
> > index 91e1a4b708..439ec8e559 100644
> > --- a/app/test-crypto-perf/cperf_test_throughput.h
> > +++ b/app/test-crypto-perf/cperf_test_throughput.h
> > @@ -18,7 +18,6 @@
> >  void *
> >  cperf_throughput_test_constructor(
> >  		struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id,
> >  		uint16_t qp_id,
> >  		const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
> > index 2939aeaa93..cf561dd700 100644
> > --- a/app/test-crypto-perf/cperf_test_verify.c
> > +++ b/app/test-crypto-perf/cperf_test_verify.c
> > @@ -18,7 +18,7 @@ struct cperf_verify_ctx {
> >
> >  	struct rte_mempool *pool;
> >
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >
> >  	cperf_populate_ops_t populate_ops;
> >
> > @@ -51,7 +51,6 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx)
> >
> >  void *
> >  cperf_verify_test_constructor(struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id, uint16_t qp_id,
> >  		const struct cperf_options *options,
> >  		const struct cperf_test_vector *test_vector,
> > @@ -74,7 +73,7 @@ cperf_verify_test_constructor(struct rte_mempool *sess_mp,
> >  	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
> >  		sizeof(struct rte_crypto_sym_op);
> >
> > -	ctx->sess = op_fns->sess_create(sess_mp, sess_priv_mp, dev_id, options,
> > +	ctx->sess = op_fns->sess_create(sess_mp, dev_id, options,
> >  			test_vector, iv_offset);
> >  	if (ctx->sess == NULL)
> >  		goto err;
> > diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-perf/cperf_test_verify.h
> > index ac2192ba99..9f70ad87ba 100644
> > --- a/app/test-crypto-perf/cperf_test_verify.h
> > +++ b/app/test-crypto-perf/cperf_test_verify.h
> > @@ -18,7 +18,6 @@
> >  void *
> >  cperf_verify_test_constructor(
> >  		struct rte_mempool *sess_mp,
> > -		struct rte_mempool *sess_priv_mp,
> >  		uint8_t dev_id,
> >  		uint16_t qp_id,
> >  		const struct cperf_options *options,
> > diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> > index 390380898e..c3327b7e55 100644
> > --- a/app/test-crypto-perf/main.c
> > +++ b/app/test-crypto-perf/main.c
> > @@ -118,35 +118,14 @@ fill_session_pool_socket(int32_t socket_id, uint32_t session_priv_size,
> >  	char mp_name[RTE_MEMPOOL_NAMESIZE];
> >  	struct rte_mempool *sess_mp;
> >
> > -	if (session_pool_socket[socket_id].priv_mp == NULL) {
> > -		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > -			"priv_sess_mp_%u", socket_id);
> > -
> > -		sess_mp = rte_mempool_create(mp_name,
> > -					nb_sessions,
> > -					session_priv_size,
> > -					0, 0, NULL, NULL, NULL,
> > -					NULL, socket_id,
> > -					0);
> > -
> > -		if (sess_mp == NULL) {
> > -			printf("Cannot create pool \"%s\" on socket %d\n",
> > -				mp_name, socket_id);
> > -			return -ENOMEM;
> > -		}
> > -
> > -		printf("Allocated pool \"%s\" on socket %d\n",
> > -			mp_name, socket_id);
> > -		session_pool_socket[socket_id].priv_mp = sess_mp;
> > -	}
> > -
> >  	if (session_pool_socket[socket_id].sess_mp == NULL) {
> >
> >  		snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> >  			"sess_mp_%u", socket_id);
> >
> >  		sess_mp = rte_cryptodev_sym_session_pool_create(mp_name,
> > -					nb_sessions, 0, 0, 0, socket_id);
> > +					nb_sessions, session_priv_size,
> > +					0, 0, socket_id);
> >
> >  		if (sess_mp == NULL) {
> >  			printf("Cannot create pool \"%s\" on socket %d\n",
> > @@ -344,12 +323,9 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
> >  			return ret;
> >
> >  		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
> > -		qp_conf.mp_session_private =
> > -				session_pool_socket[socket_id].priv_mp;
> >
> >  		if (opts->op_type == CPERF_ASYM_MODEX) {
> >  			qp_conf.mp_session = NULL;
> > -			qp_conf.mp_session_private = NULL;
> >  		}
> >
> >  		ret = rte_cryptodev_configure(cdev_id, &conf);
> > @@ -704,7 +680,6 @@ main(int argc, char **argv)
> >
> >  		ctx[i] = cperf_testmap[opts.test].constructor(
> >  				session_pool_socket[socket_id].sess_mp,
> > -				session_pool_socket[socket_id].priv_mp,
> >  				cdev_id, qp_id,
> >  				&opts, t_vec, &op_fns);
> >  		if (ctx[i] == NULL) {
> > diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> > index 82f819211a..e5c7930c63 100644
> > --- a/app/test/test_cryptodev.c
> > +++ b/app/test/test_cryptodev.c
> > @@ -79,7 +79,7 @@ struct crypto_unittest_params {
> >  #endif
> >
> >  	union {
> > -		struct rte_cryptodev_sym_session *sess;
> > +		void *sess;
> >  #ifdef RTE_LIB_SECURITY
> >  		void *sec_session;
> >  #endif
> > @@ -119,7 +119,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> >  		uint8_t *hmac_key);
> >
> >  static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> >  		struct crypto_unittest_params *ut_params,
> >  		struct crypto_testsuite_params *ts_param,
> >  		const uint8_t *cipher,
> > @@ -596,23 +596,11 @@ testsuite_setup(void)
> >  	}
> >
> >  	ts_params->session_mpool = rte_cryptodev_sym_session_pool_create(
> > -			"test_sess_mp", MAX_NB_SESSIONS, 0, 0, 0,
> > +			"test_sess_mp", MAX_NB_SESSIONS, session_size, 0, 0,
> >  			SOCKET_ID_ANY);
> >  	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
> >  			"session mempool allocation failed");
> >
> > -	ts_params->session_priv_mpool = rte_mempool_create(
> > -			"test_sess_mp_priv",
> > -			MAX_NB_SESSIONS,
> > -			session_size,
> > -			0, 0, NULL, NULL, NULL,
> > -			NULL, SOCKET_ID_ANY,
> > -			0);
> > -	TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
> > -			"session mempool allocation failed");
> > -
> > -
> > -
> >  	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
> >  			&ts_params->conf),
> >  			"Failed to configure cryptodev %u with %u qps",
> > @@ -620,7 +608,6 @@ testsuite_setup(void)
> >
> >  	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> >  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > -	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> >  	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
> >  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > @@ -650,11 +637,6 @@ testsuite_teardown(void)
> >  	}
> >
> >  	/* Free session mempools */
> > -	if (ts_params->session_priv_mpool != NULL) {
> > -		rte_mempool_free(ts_params->session_priv_mpool);
> > -		ts_params->session_priv_mpool = NULL;
> > -	}
> > -
> >  	if (ts_params->session_mpool != NULL) {
> >  		rte_mempool_free(ts_params->session_mpool);
> >  		ts_params->session_mpool = NULL;
> > @@ -1330,7 +1312,6 @@ dev_configure_and_start(uint64_t ff_disable)
> >  	ts_params->conf.ff_disable = ff_disable;
> >  	ts_params->qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> >  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > -	ts_params->qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> >  	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
> >  			&ts_params->conf),
> > @@ -1552,7 +1533,6 @@ test_queue_pair_descriptor_setup(void)
> >  	 */
> >  	qp_conf.nb_descriptors = MIN_NUM_OPS_INFLIGHT; /* min size*/
> >  	qp_conf.mp_session = ts_params->session_mpool;
> > -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> >  	for (qp_id = 0; qp_id < ts_params->conf.nb_queue_pairs; qp_id++) {
> >  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> > @@ -2146,8 +2126,7 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
> >
> >  	/* Create crypto session*/
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			ut_params->sess, &ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			ut_params->sess, &ut_params->cipher_xform);
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> >  	/* Generate crypto op data structure */
> > @@ -2247,7 +2226,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> >  		uint8_t *hmac_key);
> >
> >  static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> >  		struct crypto_unittest_params *ut_params,
> >  		struct crypto_testsuite_params *ts_params,
> >  		const uint8_t *cipher,
> > @@ -2288,7 +2267,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
> >
> >
> >  static int
> > -test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
> > +test_AES_CBC_HMAC_SHA512_decrypt_perform(void *sess,
> >  		struct crypto_unittest_params *ut_params,
> >  		struct crypto_testsuite_params *ts_params,
> >  		const uint8_t *cipher,
> > @@ -2401,8 +2380,7 @@ create_wireless_algo_hash_session(uint8_t dev_id,
> >  			ts_params->session_mpool);
> >
> >  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->auth_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->auth_xform);
> >  	if (status == -ENOTSUP)
> >  		return TEST_SKIPPED;
> >
> > @@ -2443,8 +2421,7 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
> >  			ts_params->session_mpool);
> >
> >  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->cipher_xform);
> >  	if (status == -ENOTSUP)
> >  		return TEST_SKIPPED;
> >
> > @@ -2566,8 +2543,7 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> >  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->cipher_xform);
> >  	if (status == -ENOTSUP)
> >  		return TEST_SKIPPED;
> >
> > @@ -2629,8 +2605,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
> >  			ts_params->session_mpool);
> >
> >  	status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->cipher_xform);
> >  	if (status == -ENOTSUP)
> >  		return TEST_SKIPPED;
> >
> > @@ -2699,13 +2674,11 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
> >  		ut_params->auth_xform.next = NULL;
> >  		ut_params->cipher_xform.next = &ut_params->auth_xform;
> >  		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -				&ut_params->cipher_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->cipher_xform);
> >
> >  	} else
> >  		status = rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -				&ut_params->auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->auth_xform);
> >
> >  	if (status == -ENOTSUP)
> >  		return TEST_SKIPPED;
> > @@ -7838,8 +7811,7 @@ create_aead_session(uint8_t dev_id, enum rte_crypto_aead_algorithm algo,
> >  			ts_params->session_mpool);
> >
> >  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->aead_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->aead_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -10992,8 +10964,7 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
> >  			ts_params->session_mpool);
> >
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			ut_params->sess, &ut_params->auth_xform,
> > -			ts_params->session_priv_mpool);
> > +			ut_params->sess, &ut_params->auth_xform);
> >
> >  	if (ut_params->sess == NULL)
> >  		return TEST_FAILED;
> > @@ -11206,7 +11177,7 @@ test_multi_session(void)
> >  	struct crypto_unittest_params *ut_params = &unittest_params;
> >
> >  	struct rte_cryptodev_info dev_info;
> > -	struct rte_cryptodev_sym_session **sessions;
> > +	void **sessions;
> >
> >  	uint16_t i;
> >
> > @@ -11229,9 +11200,7 @@ test_multi_session(void)
> >
> >  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> >
> > -	sessions = rte_malloc(NULL,
> > -			sizeof(struct rte_cryptodev_sym_session *) *
> > -			(MAX_NB_SESSIONS + 1), 0);
> > +	sessions = rte_malloc(NULL, sizeof(void *) * (MAX_NB_SESSIONS + 1), 0);
> >
> >  	/* Create multiple crypto sessions*/
> >  	for (i = 0; i < MAX_NB_SESSIONS; i++) {
> > @@ -11240,8 +11209,7 @@ test_multi_session(void)
> >  				ts_params->session_mpool);
> >
> >  		rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -				sessions[i], &ut_params->auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				sessions[i], &ut_params->auth_xform);
> >  		TEST_ASSERT_NOT_NULL(sessions[i],
> >  				"Session creation failed at session number %u",
> >  				i);
> > @@ -11279,8 +11247,7 @@ test_multi_session(void)
> >  	sessions[i] = NULL;
> >  	/* Next session create should fail */
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			sessions[i], &ut_params->auth_xform,
> > -			ts_params->session_priv_mpool);
> > +			sessions[i], &ut_params->auth_xform);
> >  	TEST_ASSERT_NULL(sessions[i],
> >  			"Session creation succeeded unexpectedly!");
> >
> > @@ -11311,7 +11278,7 @@ test_multi_session_random_usage(void)
> >  {
> >  	struct crypto_testsuite_params *ts_params = &testsuite_params;
> >  	struct rte_cryptodev_info dev_info;
> > -	struct rte_cryptodev_sym_session **sessions;
> > +	void **sessions;
> >  	uint32_t i, j;
> >  	struct multi_session_params ut_paramz[] = {
> >
> > @@ -11355,8 +11322,7 @@ test_multi_session_random_usage(void)
> >  	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
> >
> >  	sessions = rte_malloc(NULL,
> > -			(sizeof(struct rte_cryptodev_sym_session *)
> > -					* MAX_NB_SESSIONS) + 1, 0);
> > +			(sizeof(void *) * MAX_NB_SESSIONS) + 1, 0);
> >
> >  	for (i = 0; i < MB_SESSION_NUMBER; i++) {
> >  		sessions[i] = rte_cryptodev_sym_session_create(
> > @@ -11373,8 +11339,7 @@ test_multi_session_random_usage(void)
> >  		rte_cryptodev_sym_session_init(
> >  				ts_params->valid_devs[0],
> >  				sessions[i],
> > -				&ut_paramz[i].ut_params.auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_paramz[i].ut_params.auth_xform);
> >
> >  		TEST_ASSERT_NOT_NULL(sessions[i],
> >  				"Session creation failed at session number %u",
> > @@ -11457,8 +11422,7 @@ test_null_invalid_operation(void)
> >
> >  	/* Create Crypto session*/
> >  	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			ut_params->sess, &ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			ut_params->sess, &ut_params->cipher_xform);
> >  	TEST_ASSERT(ret < 0,
> >  			"Session creation succeeded unexpectedly");
> >
> > @@ -11475,8 +11439,7 @@ test_null_invalid_operation(void)
> >
> >  	/* Create Crypto session*/
> >  	ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			ut_params->sess, &ut_params->auth_xform,
> > -			ts_params->session_priv_mpool);
> > +			ut_params->sess, &ut_params->auth_xform);
> >  	TEST_ASSERT(ret < 0,
> >  			"Session creation succeeded unexpectedly");
> >
> > @@ -11521,8 +11484,7 @@ test_null_burst_operation(void)
> >
> >  	/* Create Crypto session*/
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> > -			ut_params->sess, &ut_params->cipher_xform,
> > -			ts_params->session_priv_mpool);
> > +			ut_params->sess, &ut_params->cipher_xform);
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> >  	TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params->op_mpool,
> > @@ -11634,7 +11596,6 @@ test_enq_callback_setup(void)
> >
> >  	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> >  	qp_conf.mp_session = ts_params->session_mpool;
> > -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> >  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> >  			ts_params->valid_devs[0], qp_id, &qp_conf,
> > @@ -11734,7 +11695,6 @@ test_deq_callback_setup(void)
> >
> >  	qp_conf.nb_descriptors = MAX_NUM_OPS_INFLIGHT;
> >  	qp_conf.mp_session = ts_params->session_mpool;
> > -	qp_conf.mp_session_private = ts_params->session_priv_mpool;
> >
> >  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> >  			ts_params->valid_devs[0], qp_id, &qp_conf,
> > @@ -11943,8 +11903,7 @@ static int create_gmac_session(uint8_t dev_id,
> >  			ts_params->session_mpool);
> >
> >  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -			&ut_params->auth_xform,
> > -			ts_params->session_priv_mpool);
> > +			&ut_params->auth_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -12588,8 +12547,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
> >  			ts_params->session_mpool);
> >
> >  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -				&ut_params->auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->auth_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -12641,8 +12599,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
> >  			ts_params->session_mpool);
> >
> >  	rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
> > -				&ut_params->auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->auth_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -13149,8 +13106,7 @@ test_authenticated_encrypt_with_esn(
> >
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> >  				ut_params->sess,
> > -				&ut_params->cipher_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->cipher_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -13281,8 +13237,7 @@ test_authenticated_decrypt_with_esn(
> >
> >  	rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
> >  				ut_params->sess,
> > -				&ut_params->auth_xform,
> > -				ts_params->session_priv_mpool);
> > +				&ut_params->auth_xform);
> >
> >  	TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
> >
> > @@ -14003,11 +13958,6 @@ test_scheduler_attach_worker_op(void)
> >  			rte_mempool_free(ts_params->session_mpool);
> >  			ts_params->session_mpool = NULL;
> >  		}
> > -		if (ts_params->session_priv_mpool) {
> > -			rte_mempool_free(ts_params->session_priv_mpool);
> > -			ts_params->session_priv_mpool = NULL;
> > -		}
> > -
> >  		if (info.sym.max_nb_sessions != 0 &&
> >  				info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
> >  			RTE_LOG(ERR, USER1,
> > @@ -14024,32 +13974,14 @@ test_scheduler_attach_worker_op(void)
> >  			ts_params->session_mpool =
> >  				rte_cryptodev_sym_session_pool_create(
> >  						"test_sess_mp",
> > -						MAX_NB_SESSIONS, 0, 0, 0,
> > +						MAX_NB_SESSIONS,
> > +						session_size, 0, 0,
> >  						SOCKET_ID_ANY);
> > 			TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
> >  					"session mempool allocation failed");
> >  		}
> >
> > -		/*
> > -		 * Create mempool with maximum number of sessions,
> > -		 * to include device specific session private data
> > -		 */
> > -		if (ts_params->session_priv_mpool == NULL) {
> > -			ts_params->session_priv_mpool = rte_mempool_create(
> > -					"test_sess_mp_priv",
> > -					MAX_NB_SESSIONS,
> > -					session_size,
> > -					0, 0, NULL, NULL, NULL,
> > -					NULL, SOCKET_ID_ANY,
> > -					0);
> > -
> > -			TEST_ASSERT_NOT_NULL(ts_params->session_priv_mpool,
> > -					"session mempool allocation failed");
> > -		}
> > -
> >  		ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > -		ts_params->qp_conf.mp_session_private =
> > -				ts_params->session_priv_mpool;
> >
> >  		ret = rte_cryptodev_scheduler_worker_attach(sched_id,
> >  				(uint8_t)i);
> > diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> > index 1cdd84d01f..a3a10d484b 100644
> > --- a/app/test/test_cryptodev.h
> > +++ b/app/test/test_cryptodev.h
> > @@ -89,7 +89,6 @@ struct crypto_testsuite_params {
> >  	struct rte_mempool *large_mbuf_pool;
> >  	struct rte_mempool *op_mpool;
> >  	struct rte_mempool *session_mpool;
> > -	struct rte_mempool *session_priv_mpool;
> >  	struct rte_cryptodev_config conf;
> >  	struct rte_cryptodev_qp_conf qp_conf;
> >
> > diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
> > index 9d19a6d6d9..35da574da8 100644
> > --- a/app/test/test_cryptodev_asym.c
> > +++ b/app/test/test_cryptodev_asym.c
> > @@ -924,7 +924,6 @@ testsuite_setup(void)
> >  	/* configure qp */
> >  	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
> >  	ts_params->qp_conf.mp_session = ts_params->session_mpool;
> > -	ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
> >  	for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
> >  		TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> >  			dev_id, qp_id, &ts_params->qp_conf,
> > diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c
> > index 3cdb2c96e8..9417803f18 100644
> > --- a/app/test/test_cryptodev_blockcipher.c
> > +++ b/app/test/test_cryptodev_blockcipher.c
> > @@ -68,7 +68,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
> >  	struct rte_mempool *mbuf_pool,
> >  	struct rte_mempool *op_mpool,
> >  	struct rte_mempool *sess_mpool,
> > -	struct rte_mempool *sess_priv_mpool,
> >  	uint8_t dev_id,
> >  	char *test_msg)
> >  {
> > @@ -81,7 +80,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
> >  	struct rte_crypto_sym_op *sym_op = NULL;
> >  	struct rte_crypto_op *op = NULL;
> >  	struct rte_cryptodev_info dev_info;
> > -	struct rte_cryptodev_sym_session *sess = NULL;
> > +	void *sess = NULL;
> >
> >  	int status = TEST_SUCCESS;
> >  	const struct blockcipher_test_data *tdata = t->test_data;
> > @@ -514,7 +513,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
> >  		sess = rte_cryptodev_sym_session_create(sess_mpool);
> >
> >  		status = rte_cryptodev_sym_session_init(dev_id, sess,
> > -				init_xform, sess_priv_mpool);
> > +				init_xform);
> >  		if (status == -ENOTSUP) {
> >  			snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "UNSUPPORTED");
> >  			status = TEST_SKIPPED;
> > @@ -831,7 +830,6 @@ blockcipher_test_case_run(const void *data)
> >  			p_testsuite_params->mbuf_pool,
> >  			p_testsuite_params->op_mpool,
> >  			p_testsuite_params->session_mpool,
> > -			p_testsuite_params->session_priv_mpool,
> >  			p_testsuite_params->valid_devs[0],
> >  			test_msg);
> >  	return status;
> > diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
> > index 3ad20921e2..59229a1cde 100644
> > --- a/app/test/test_event_crypto_adapter.c
> > +++ b/app/test/test_event_crypto_adapter.c
> > @@ -61,7 +61,6 @@ struct event_crypto_adapter_test_params {
> >  	struct rte_mempool *mbuf_pool;
> >  	struct rte_mempool *op_mpool;
> >  	struct rte_mempool *session_mpool;
> > -	struct rte_mempool *session_priv_mpool;
> >  	struct rte_cryptodev_config *config;
> >  	uint8_t crypto_event_port_id;
> >  	uint8_t internal_port_op_fwd;
> > @@ -167,7 +166,7 @@ static int
> >  test_op_forward_mode(uint8_t session_less)
> >  {
> >  	struct rte_crypto_sym_xform cipher_xform;
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >  	union rte_event_crypto_metadata m_data;
> >  	struct rte_crypto_sym_op *sym_op;
> >  	struct rte_crypto_op *op;
> > @@ -203,7 +202,7 @@ test_op_forward_mode(uint8_t session_less)
> >
> >  		/* Create Crypto session*/
> >  		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> > -				&cipher_xform, params.session_priv_mpool);
> > +				&cipher_xform);
> >  		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> >
> >  		ret = rte_event_crypto_adapter_caps_get(evdev, TEST_CDEV_ID,
> > @@ -367,7 +366,7 @@ static int
> >  test_op_new_mode(uint8_t session_less)
> >  {
> >  	struct rte_crypto_sym_xform cipher_xform;
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >  	union rte_event_crypto_metadata m_data;
> >  	struct rte_crypto_sym_op *sym_op;
> >  	struct rte_crypto_op *op;
> > @@ -411,7 +410,7 @@ test_op_new_mode(uint8_t session_less)
> >  						&m_data, sizeof(m_data));
> >  		}
> >  		ret = rte_cryptodev_sym_session_init(TEST_CDEV_ID, sess,
> > -				&cipher_xform, params.session_priv_mpool);
> > +				&cipher_xform);
> >  		TEST_ASSERT_SUCCESS(ret, "Failed to init session\n");
> >
> >  		rte_crypto_op_attach_sym_session(op, sess);
> > @@ -553,22 +552,12 @@ configure_cryptodev(void)
> >
> >  	params.session_mpool = rte_cryptodev_sym_session_pool_create(
> >  			"CRYPTO_ADAPTER_SESSION_MP",
> > -			MAX_NB_SESSIONS, 0, 0,
> > +			MAX_NB_SESSIONS, session_size, 0,
> >  			sizeof(union rte_event_crypto_metadata),
> >  			SOCKET_ID_ANY);
> >  	TEST_ASSERT_NOT_NULL(params.session_mpool,
> >  			"session mempool allocation failed\n");
> >
> > -	params.session_priv_mpool = rte_mempool_create(
> > -				"CRYPTO_AD_SESS_MP_PRIV",
> > -				MAX_NB_SESSIONS,
> > -				session_size,
> > -				0, 0, NULL, NULL, NULL,
> > -				NULL, SOCKET_ID_ANY,
> > -				0);
> > -	TEST_ASSERT_NOT_NULL(params.session_priv_mpool,
> > -			"session mempool allocation failed\n");
> > -
> >  	rte_cryptodev_info_get(TEST_CDEV_ID, &info);
> >  	conf.nb_queue_pairs = info.max_nb_queue_pairs;
> >  	conf.socket_id = SOCKET_ID_ANY;
> > @@ -580,7 +569,6 @@ configure_cryptodev(void)
> >
> >  	qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
> >  	qp_conf.mp_session = params.session_mpool;
> > -	qp_conf.mp_session_private = params.session_priv_mpool;
> >
> >  	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
> >  			TEST_CDEV_ID, TEST_CDEV_QP_ID, &qp_conf,
> > @@ -934,12 +922,6 @@ crypto_teardown(void)
> >  		rte_mempool_free(params.session_mpool);
> >  		params.session_mpool = NULL;
> >  	}
> > -	if (params.session_priv_mpool != NULL) {
> > -		rte_mempool_avail_count(params.session_priv_mpool);
> > -		rte_mempool_free(params.session_priv_mpool);
> > -		params.session_priv_mpool = NULL;
> > -	}
> > -
> >  	/* Free ops mempool */
> >  	if (params.op_mpool != NULL) {
> >  		RTE_LOG(DEBUG, USER1, "EVENT_CRYPTO_SYM_OP_POOL count %u\n",
> > diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
> > index 2ffa2a8e79..134545efe1 100644
> > --- a/app/test/test_ipsec.c
> > +++ b/app/test/test_ipsec.c
> > @@ -355,20 +355,9 @@ testsuite_setup(void)
> >  		return TEST_FAILED;
> >  	}
> >
> > -	ts_params->qp_conf.mp_session_private = rte_mempool_create(
> > -				"test_priv_sess_mp",
> > -				MAX_NB_SESSIONS,
> > -				sess_sz,
> > -				0, 0, NULL, NULL, NULL,
> > -				NULL, SOCKET_ID_ANY,
> > -				0);
> > -
> > -	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
> > -			"private session mempool allocation failed");
> > -
> >  	ts_params->qp_conf.mp_session =
> >  		rte_cryptodev_sym_session_pool_create("test_sess_mp",
> > -			MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
> > +			MAX_NB_SESSIONS, sess_sz, 0, 0, SOCKET_ID_ANY);
> >
> >  	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
> >  			"session mempool allocation failed");
> > @@ -413,11 +402,6 @@ testsuite_teardown(void)
> >  		rte_mempool_free(ts_params->qp_conf.mp_session);
> >  		ts_params->qp_conf.mp_session = NULL;
> >  	}
> > -
> > -	if (ts_params->qp_conf.mp_session_private != NULL) {
> > -		rte_mempool_free(ts_params->qp_conf.mp_session_private);
> > -		ts_params->qp_conf.mp_session_private = NULL;
> > -	}
> >  }
> >
> >  static int
> > @@ -644,7 +628,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
> >  	struct rte_cryptodev_qp_conf *qp, uint8_t dev_id, uint32_t j)
> >  {
> >  	int32_t rc;
> > -	struct rte_cryptodev_sym_session *s;
> > +	void *s;
> >
> >  	s = rte_cryptodev_sym_session_create(qp->mp_session);
> >  	if (s == NULL)
> > @@ -652,7 +636,7 @@ create_crypto_session(struct ipsec_unitest_params *ut,
> >
> >  	/* initiliaze SA crypto session for device */
> >  	rc = rte_cryptodev_sym_session_init(dev_id, s,
> > -			ut->crypto_xforms, qp->mp_session_private);
> > +			ut->crypto_xforms);
> >  	if (rc == 0) {
> >  		ut->ss[j].crypto.ses = s;
> >  		return 0;
> > diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > index edb7275e76..75330292af 100644
> > --- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > +++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
> > @@ -235,7 +235,6 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >  		goto qp_setup_cleanup;
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -259,10 +258,8 @@ aesni_gcm_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  static int
> >  aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >  	struct aesni_gcm_private *internals = dev->data->dev_private;
> >
> > @@ -271,42 +268,24 @@ aesni_gcm_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		AESNI_GCM_LOG(ERR,
> > -				"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> >  	ret = aesni_gcm_set_session_parameters(internals->ops,
> > -				sess_private_data, xform);
> > +				sess, xform);
> >  	if (ret != 0) {
> >  		AESNI_GCM_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -			sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +aesni_gcm_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct aesni_gcm_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct aesni_gcm_session));
> >  }
> >
> >  struct rte_cryptodev_ops aesni_gcm_pmd_ops = {
> > diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > index 39c67e3952..efdc05c45f 100644
> > --- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > +++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
> > @@ -944,7 +944,6 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >  	}
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -974,11 +973,8 @@ aesni_mb_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  /** Configure a aesni multi-buffer session from a crypto xform chain */
> >  static int
> >  aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > -		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		struct rte_crypto_sym_xform *xform, void *sess)
> >  {
> > -	void *sess_private_data;
> >  	struct aesni_mb_private *internals = dev->data->dev_private;
> >  	int ret;
> >
> > @@ -987,43 +983,25 @@ aesni_mb_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		AESNI_MB_LOG(ERR,
> > -				"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> >  	ret = aesni_mb_set_session_parameters(internals->mb_mgr,
> > -			sess_private_data, xform);
> > +			sess, xform);
> >  	if (ret != 0) {
> >  		AESNI_MB_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -			sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +aesni_mb_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > +	RTE_SET_USED(dev);
> >
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct aesni_mb_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct aesni_mb_session));
> >  }
> >
> >  struct rte_cryptodev_ops aesni_mb_pmd_ops = {
> > diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > index 1b2749fe62..2d3b54b063 100644
> > --- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > +++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
> > @@ -244,7 +244,6 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >  		goto qp_setup_cleanup;
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -268,10 +267,8 @@ armv8_crypto_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  static int
> >  armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	if (unlikely(sess == NULL)) {
> > @@ -279,42 +276,23 @@ armv8_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		CDEV_LOG_ERR(
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = armv8_crypto_set_session_parameters(sess_private_data, xform);
> > +	ret = armv8_crypto_set_session_parameters(sess, xform);
> >  	if (ret != 0) {
> >  		ARMV8_CRYPTO_LOG_ERR("failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -			sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +armv8_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct armv8_crypto_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct armv8_crypto_session));
> >  }
> >
> >  struct rte_cryptodev_ops armv8_crypto_pmd_ops = {
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > index 675ed0ad55..b4b167d0c2 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
> > @@ -224,10 +224,9 @@ bcmfs_sym_get_session(struct rte_crypto_op *op)
> >  int
> >  bcmfs_sym_session_configure(struct rte_cryptodev *dev,
> >  			    struct rte_crypto_sym_xform *xform,
> > -			    struct rte_cryptodev_sym_session *sess,
> > -			    struct rte_mempool *mempool)
> > +			    void *sess)
> >  {
> > -	void *sess_private_data;
> > +	RTE_SET_USED(dev);
> >  	int ret;
> >
> >  	if (unlikely(sess == NULL)) {
> > @@ -235,44 +234,23 @@ bcmfs_sym_session_configure(struct rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		BCMFS_DP_LOG(ERR,
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = crypto_set_session_parameters(sess_private_data, xform);
> > +	ret = crypto_set_session_parameters(sess, xform);
> >
> >  	if (ret != 0) {
> >  		BCMFS_DP_LOG(ERR, "Failed configure session parameters");
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -				     sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /* Clear the memory of session so it doesn't leave key material behind */
> >  void
> > -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> > -			struct rte_cryptodev_sym_session  *sess)
> > +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > -	if (sess_priv) {
> > -		struct rte_mempool *sess_mp;
> > -
> > -		memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
> > -		sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	RTE_SET_USED(dev);
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct bcmfs_sym_session));
> >  }
> >
> >  unsigned int
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > index d40595b4bd..7faafe2fd5 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
> > @@ -93,12 +93,10 @@ bcmfs_process_crypto_op(struct rte_crypto_op *op,
> >  int
> >  bcmfs_sym_session_configure(struct rte_cryptodev *dev,
> >  			    struct rte_crypto_sym_xform *xform,
> > -			    struct rte_cryptodev_sym_session *sess,
> > -			    struct rte_mempool *mempool);
> > +			    void *sess);
> >
> >  void
> > -bcmfs_sym_session_clear(struct rte_cryptodev *dev,
> > -			struct rte_cryptodev_sym_session  *sess);
> > +bcmfs_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> >
> >  unsigned int
> >  bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
> > diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
> > index ce7a100778..8a04820fa6 100644
> > --- a/drivers/crypto/caam_jr/caam_jr.c
> > +++ b/drivers/crypto/caam_jr/caam_jr.c
> > @@ -1692,52 +1692,36 @@ caam_jr_set_session_parameters(struct rte_cryptodev *dev,
> >  static int
> >  caam_jr_sym_session_configure(struct rte_cryptodev *dev,
> >  			      struct rte_crypto_sym_xform *xform,
> > -			      struct rte_cryptodev_sym_session *sess,
> > -			      struct rte_mempool *mempool)
> > +			      void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	PMD_INIT_FUNC_TRACE();
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		CAAM_JR_ERR("Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	memset(sess_private_data, 0, sizeof(struct caam_jr_session));
> > -	ret = caam_jr_set_session_parameters(dev, xform, sess_private_data);
> > +	memset(sess, 0, sizeof(struct caam_jr_session));
> > +	ret = caam_jr_set_session_parameters(dev, xform, sess);
> >  	if (ret != 0) {
> >  		CAAM_JR_ERR("failed to configure session parameters");
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id, sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /* Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -caam_jr_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +caam_jr_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -	struct caam_jr_session *s = (struct caam_jr_session *)sess_priv;
> > +	RTE_SET_USED(dev);
> > +
> > +	struct caam_jr_session *s = (struct caam_jr_session *)sess;
> >
> >  	PMD_INIT_FUNC_TRACE();
> >
> > -	if (sess_priv) {
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > +	if (sess) {
> >  		rte_free(s->cipher_key.data);
> >  		rte_free(s->auth_key.data);
> >  		memset(s, 0, sizeof(struct caam_jr_session));
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> >  	}
> >  }
> >
> > diff --git a/drivers/crypto/ccp/ccp_pmd_ops.c b/drivers/crypto/ccp/ccp_pmd_ops.c
> > index 0d615d311c..cac1268130 100644
> > --- a/drivers/crypto/ccp/ccp_pmd_ops.c
> > +++ b/drivers/crypto/ccp/ccp_pmd_ops.c
> > @@ -727,7 +727,6 @@ ccp_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >  	}
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	/* mempool for batch info */
> >  	qp->batch_mp = rte_mempool_create(
> > @@ -758,11 +757,9 @@ ccp_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  static int
> >  ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  			  struct rte_crypto_sym_xform *xform,
> > -			  struct rte_cryptodev_sym_session *sess,
> > -			  struct rte_mempool *mempool)
> > +			  void *sess)
> >  {
> >  	int ret;
> > -	void *sess_private_data;
> >  	struct ccp_private *internals;
> >
> >  	if (unlikely(sess == NULL || xform == NULL)) {
> > @@ -770,39 +767,22 @@ ccp_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		return -ENOMEM;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		CCP_LOG_ERR("Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> >  	internals = (struct ccp_private *)dev->data->dev_private;
> > -	ret = ccp_set_session_parameters(sess_private_data, xform, internals);
> > +	ret = ccp_set_session_parameters(sess, xform, internals);
> >  	if (ret != 0) {
> >  		CCP_LOG_ERR("failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -				 sess_private_data);
> >
> >  	return 0;
> >  }
> >
> >  static void
> > -ccp_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		      struct rte_cryptodev_sym_session *sess)
> > +ccp_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > -	if (sess_priv) {
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -		memset(sess_priv, 0, sizeof(struct ccp_session));
> > -		set_sym_session_private_data(sess, index, NULL);
> > -	}
> > +	RTE_SET_USED(dev);
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct ccp_session));
> >  }
> >
> >  struct rte_cryptodev_ops ccp_ops = {
> > diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > index 99968cc353..50cae5e3d6 100644
> > --- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
> > @@ -32,17 +32,18 @@ cn10k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
> >  	if (sess == NULL)
> >  		return NULL;
> >
> > -	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> > -				    sess, qp->sess_mp_priv);
> > +	sess->sess_data[driver_id].data =
> > +			(void *)((uint8_t *)sess +
> > +			rte_cryptodev_sym_get_header_session_size() +
> > +			(driver_id * sess->priv_sz));
> > +	priv = get_sym_session_private_data(sess, driver_id);
> > +	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform, (void *)priv);
> >  	if (ret)
> >  		goto sess_put;
> >
> > -	priv = get_sym_session_private_data(sess, driver_id);
> > -
> >  	sym_op->session = sess;
> >
> >  	return priv;
> > -
> >  sess_put:
> >  	rte_mempool_put(qp->sess_mp, sess);
> >  	return NULL;
> > @@ -144,9 +145,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
> >  			ret = cpt_sym_inst_fill(qp, op, sess, infl_req,
> >  						&inst[0]);
> >  			if (unlikely(ret)) {
> > -				sym_session_clear(cn10k_cryptodev_driver_id,
> > -						  op->sym->session);
> > -				rte_mempool_put(qp->sess_mp, op->sym->session);
> > +				sym_session_clear(op->sym->session);
> >  				return 0;
> >  			}
> >  			w7 = sess->cpt_inst_w7;
> > @@ -437,8 +436,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
> >  temp_sess_free:
> >  	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> >  		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> > -			sym_session_clear(cn10k_cryptodev_driver_id,
> > -					  cop->sym->session);
> > +			sym_session_clear(cop->sym->session);
> >  			sz = rte_cryptodev_sym_get_existing_header_session_size(
> >  				cop->sym->session);
> >  			memset(cop->sym->session, 0, sz);
> > diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > index 4c2dc5b080..5f83581131 100644
> > --- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
> > @@ -81,17 +81,19 @@ cn9k_cpt_sym_temp_sess_create(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op)
> >  	if (sess == NULL)
> >  		return NULL;
> >
> > -	ret = sym_session_configure(qp->lf.roc_cpt, driver_id, sym_op->xform,
> > -				    sess, qp->sess_mp_priv);
> > +	sess->sess_data[driver_id].data =
> > +			(void *)((uint8_t *)sess +
> > +			rte_cryptodev_sym_get_header_session_size() +
> > +			(driver_id * sess->priv_sz));
> > +	priv = get_sym_session_private_data(sess, driver_id);
> > +	ret = sym_session_configure(qp->lf.roc_cpt, sym_op->xform,
> > +			(void *)priv);
> >  	if (ret)
> >  		goto sess_put;
> >
> > -	priv = get_sym_session_private_data(sess, driver_id);
> > -
> >  	sym_op->session = sess;
> >
> >  	return priv;
> > -
> >  sess_put:
> >  	rte_mempool_put(qp->sess_mp, sess);
> >  	return NULL;
> > @@ -126,8 +128,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
> >  			ret = cn9k_cpt_sym_inst_fill(qp, op, sess, infl_req,
> >  						     inst);
> >  			if (unlikely(ret)) {
> > -				sym_session_clear(cn9k_cryptodev_driver_id,
> > -						  op->sym->session);
> > +				sym_session_clear(op->sym->session);
> >  				rte_mempool_put(qp->sess_mp, op->sym->session);
> >  			}
> >  			inst->w7.u64 = sess->cpt_inst_w7;
> > @@ -484,8 +485,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
> >  temp_sess_free:
> >  	if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> >  		if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
> > -			sym_session_clear(cn9k_cryptodev_driver_id,
> > -					  cop->sym->session);
> > +			sym_session_clear(cop->sym->session);
> >  			sz = rte_cryptodev_sym_get_existing_header_session_size(
> >  				cop->sym->session);
> >  			memset(cop->sym->session, 0, sz);
> > diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > index 41d8fe49e1..52d9cf0cf3 100644
> > --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
> > @@ -379,7 +379,6 @@ cnxk_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >  	}
> >
> >  	qp->sess_mp = conf->mp_session;
> > -	qp->sess_mp_priv = conf->mp_session_private;
> >  	dev->data->queue_pairs[qp_id] = qp;
> >
> >  	return 0;
> > @@ -493,27 +492,20 @@ cnxk_cpt_inst_w7_get(struct cnxk_se_sess *sess, struct roc_cpt *roc_cpt)
> >  }
> >
> >  int
> > -sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> > +sym_session_configure(struct roc_cpt *roc_cpt,
> >  		      struct rte_crypto_sym_xform *xform,
> > -		      struct rte_cryptodev_sym_session *sess,
> > -		      struct rte_mempool *pool)
> > +		      void *sess)
> >  {
> >  	struct cnxk_se_sess *sess_priv;
> > -	void *priv;
> >  	int ret;
> >
> >  	ret = sym_xform_verify(xform);
> >  	if (unlikely(ret < 0))
> >  		return ret;
> >
> > -	if (unlikely(rte_mempool_get(pool, &priv))) {
> > -		plt_dp_err("Could not allocate session private data");
> > -		return -ENOMEM;
> > -	}
> > +	memset(sess, 0, sizeof(struct cnxk_se_sess));
> >
> > -	memset(priv, 0, sizeof(struct cnxk_se_sess));
> > -
> > -	sess_priv = priv;
> > +	sess_priv = sess;
> >
> >  	switch (ret) {
> >  	case CNXK_CPT_CIPHER:
> > @@ -547,7 +539,7 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> >  	}
> >
> >  	if (ret)
> > -		goto priv_put;
> > +		return -ENOTSUP;
> >
> >  	if ((sess_priv->roc_se_ctx.fc_type == ROC_SE_HASH_HMAC) &&
> >  	    cpt_mac_len_verify(&xform->auth)) {
> > @@ -557,66 +549,45 @@ sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> >  			sess_priv->roc_se_ctx.auth_key = NULL;
> >  		}
> >
> > -		ret = -ENOTSUP;
> > -		goto priv_put;
> > +		return -ENOTSUP;
> >  	}
> >
> >  	sess_priv->cpt_inst_w7 = cnxk_cpt_inst_w7_get(sess_priv, roc_cpt);
> >
> > -	set_sym_session_private_data(sess, driver_id, sess_priv);
> > -
> >  	return 0;
> > -
> > -priv_put:
> > -	rte_mempool_put(pool, priv);
> > -
> > -	return -ENOTSUP;
> >  }
> >
> >  int
> >  cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
> >  			       struct rte_crypto_sym_xform *xform,
> > -			       struct rte_cryptodev_sym_session *sess,
> > -			       struct rte_mempool *pool)
> > +			       void *sess)
> >  {
> >  	struct cnxk_cpt_vf *vf = dev->data->dev_private;
> >  	struct roc_cpt *roc_cpt = &vf->cpt;
> > -	uint8_t driver_id;
> >
> > -	driver_id = dev->driver_id;
> > -
> > -	return sym_session_configure(roc_cpt, driver_id, xform, sess, pool);
> > +	return sym_session_configure(roc_cpt, xform, sess);
> >  }
> >
> >  void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> >  {
> > -	void *priv = get_sym_session_private_data(sess, driver_id);
> > -	struct cnxk_se_sess *sess_priv;
> > -	struct rte_mempool *pool;
> > +	struct cnxk_se_sess *sess_priv = sess;
> >
> > -	if (priv == NULL)
> > +	if (sess == NULL)
> >  		return;
> >
> > -	sess_priv = priv;
> > -
> >  	if (sess_priv->roc_se_ctx.auth_key != NULL)
> >  		plt_free(sess_priv->roc_se_ctx.auth_key);
> >
> > -	memset(priv, 0, cnxk_cpt_sym_session_get_size(NULL));
> > -
> > -	pool = rte_mempool_from_obj(priv);
> > -
> > -	set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > -	rte_mempool_put(pool, priv);
> > +	memset(sess_priv, 0, cnxk_cpt_sym_session_get_size(NULL));
> >  }
> >
> >  void
> > -cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > -			   struct rte_cryptodev_sym_session *sess)
> > +cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	return sym_session_clear(dev->driver_id, sess);
> > +	RTE_SET_USED(dev);
> > +
> > +	return sym_session_clear(sess);
> >  }
> >
> >  unsigned int
> > diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > index c5332dec53..3c09d10582 100644
> > --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
> > @@ -111,18 +111,15 @@ unsigned int cnxk_cpt_sym_session_get_size(struct rte_cryptodev *dev);
> >
> >  int cnxk_cpt_sym_session_configure(struct rte_cryptodev *dev,
> >  				   struct rte_crypto_sym_xform *xform,
> > -				   struct rte_cryptodev_sym_session *sess,
> > -				   struct rte_mempool *pool);
> > +				   void *sess);
> >
> > -int sym_session_configure(struct roc_cpt *roc_cpt, int driver_id,
> > +int sym_session_configure(struct roc_cpt *roc_cpt,
> >  			  struct rte_crypto_sym_xform *xform,
> > -			  struct rte_cryptodev_sym_session *sess,
> > -			  struct rte_mempool *pool);
> > +			  void *sess);
> >
> > -void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > -				struct rte_cryptodev_sym_session *sess);
> > +void cnxk_cpt_sym_session_clear(struct rte_cryptodev *dev, void *sess);
> >
> > -void sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess);
> > +void sym_session_clear(void *sess);
> >
> >  unsigned int cnxk_ae_session_size_get(struct rte_cryptodev *dev __rte_unused);
> >
> > diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > index 176f1a27a0..42229763f8 100644
> > --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
> > @@ -3438,49 +3438,32 @@ dpaa2_sec_security_session_destroy(void *dev __rte_unused, void *sess)
> >  static int
> >  dpaa2_sec_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		DPAA2_SEC_ERR("Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = dpaa2_sec_set_session_parameters(dev, xform, sess_private_data);
> > +	ret = dpaa2_sec_set_session_parameters(dev, xform, sess);
> >  	if (ret != 0) {
> >  		DPAA2_SEC_ERR("Failed to configure session parameters");
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +dpaa2_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> >  	PMD_INIT_FUNC_TRACE();
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -	dpaa2_sec_session *s = (dpaa2_sec_session *)sess_priv;
> > +	RTE_SET_USED(dev);
> > +	dpaa2_sec_session *s = (dpaa2_sec_session *)sess;
> >
> > -	if (sess_priv) {
> > +	if (sess) {
> >  		rte_free(s->ctxt);
> >  		rte_free(s->cipher_key.data);
> >  		rte_free(s->auth_key.data);
> >  		memset(s, 0, sizeof(dpaa2_sec_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> >  	}
> >  }
> >
> > diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
> > index 5a087df090..4727088b45 100644
> > --- a/drivers/crypto/dpaa_sec/dpaa_sec.c
> > +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
> > @@ -2537,33 +2537,18 @@ dpaa_sec_set_session_parameters(struct rte_cryptodev *dev,
> >
> >  static int
> >  dpaa_sec_sym_session_configure(struct rte_cryptodev *dev,
> > -		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		struct rte_crypto_sym_xform *xform, void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	PMD_INIT_FUNC_TRACE();
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		DPAA_SEC_ERR("Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = dpaa_sec_set_session_parameters(dev, xform, sess_private_data);
> > +	ret = dpaa_sec_set_session_parameters(dev, xform, sess);
> >  	if (ret != 0) {
> >  		DPAA_SEC_ERR("failed to configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -			sess_private_data);
> > -
> > -
> >  	return 0;
> >  }
> >
> > @@ -2584,18 +2569,14 @@ free_session_memory(struct rte_cryptodev
> > *dev, dpaa_sec_session *s)
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -dpaa_sec_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +dpaa_sec_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> >  	PMD_INIT_FUNC_TRACE();
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -	dpaa_sec_session *s = (dpaa_sec_session *)sess_priv;
> > +	RTE_SET_USED(dev);
> > +	dpaa_sec_session *s = (dpaa_sec_session *)sess;
> >
> > -	if (sess_priv) {
> > +	if (sess)
> >  		free_session_memory(dev, s);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -	}
> >  }
> >
> >  #ifdef RTE_LIB_SECURITY
> > diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > index f075054807..b2e5c92598 100644
> > --- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > +++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
> > @@ -220,7 +220,6 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> >
> >  	qp->mgr = internals->mgr;
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -243,10 +242,8 @@ kasumi_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >  static int
> >  kasumi_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >  	struct kasumi_private *internals = dev->data->dev_private;
> >
> > @@ -255,43 +252,24 @@ kasumi_pmd_sym_session_configure(struct
> > rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		KASUMI_LOG(ERR,
> > -				"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> >  	ret = kasumi_set_session_parameters(internals->mgr,
> > -					sess_private_data, xform);
> > +					sess, xform);
> >  	if (ret != 0) {
> >  		KASUMI_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +kasumi_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct kasumi_session));
> > -		struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct kasumi_session));
> >  }
> >
> >  struct rte_cryptodev_ops kasumi_pmd_ops = {
> > diff --git a/drivers/crypto/mlx5/mlx5_crypto.c
> > b/drivers/crypto/mlx5/mlx5_crypto.c
> > index 682cf8b607..615ab9f45d 100644
> > --- a/drivers/crypto/mlx5/mlx5_crypto.c
> > +++ b/drivers/crypto/mlx5/mlx5_crypto.c
> > @@ -165,14 +165,12 @@ mlx5_crypto_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >  static int
> >  mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
> >  				  struct rte_crypto_sym_xform *xform,
> > -				  struct rte_cryptodev_sym_session *session,
> > -				  struct rte_mempool *mp)
> > +				  void *session)
> >  {
> >  	struct mlx5_crypto_priv *priv = dev->data->dev_private;
> > -	struct mlx5_crypto_session *sess_private_data;
> > +	struct mlx5_crypto_session *sess_private_data = session;
> >  	struct rte_crypto_cipher_xform *cipher;
> >  	uint8_t encryption_order;
> > -	int ret;
> >
> >  	if (unlikely(xform->next != NULL)) {
> >  		DRV_LOG(ERR, "Xform next is not supported.");
> > @@ -183,17 +181,9 @@ mlx5_crypto_sym_session_configure(struct
> > rte_cryptodev *dev,
> >  		DRV_LOG(ERR, "Only AES-XTS algorithm is supported.");
> >  		return -ENOTSUP;
> >  	}
> > -	ret = rte_mempool_get(mp, (void *)&sess_private_data);
> > -	if (ret != 0) {
> > -		DRV_LOG(ERR,
> > -			"Failed to get session %p private data from
> > mempool.",
> > -			sess_private_data);
> > -		return -ENOMEM;
> > -	}
> >  	cipher = &xform->cipher;
> >  	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
> >  	if (sess_private_data->dek == NULL) {
> > -		rte_mempool_put(mp, sess_private_data);
> >  		DRV_LOG(ERR, "Failed to prepare dek.");
> >  		return -ENOMEM;
> >  	}
> > @@ -228,27 +218,21 @@ mlx5_crypto_sym_session_configure(struct
> > rte_cryptodev *dev,
> >  	sess_private_data->dek_id =
> >  			rte_cpu_to_be_32(sess_private_data->dek->obj->id
> > &
> >  					 0xffffff);
> > -	set_sym_session_private_data(session, dev->driver_id,
> > -				     sess_private_data);
> >  	DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data);
> >  	return 0;
> >  }
> >
> >  static void
> > -mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev,
> > -			      struct rte_cryptodev_sym_session *sess)
> > +mlx5_crypto_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> >  	struct mlx5_crypto_priv *priv = dev->data->dev_private;
> > -	struct mlx5_crypto_session *spriv = get_sym_session_private_data(sess,
> > -								dev->driver_id);
> > +	struct mlx5_crypto_session *spriv = sess;
> >
> >  	if (unlikely(spriv == NULL)) {
> >  		DRV_LOG(ERR, "Failed to get session %p private data.", spriv);
> >  		return;
> >  	}
> >  	mlx5_crypto_dek_destroy(priv, spriv->dek);
> > -	set_sym_session_private_data(sess, dev->driver_id, NULL);
> > -	rte_mempool_put(rte_mempool_from_obj(spriv), spriv);
> >  	DRV_LOG(DEBUG, "Session %p was cleared.", spriv);
> >  }
> >
> > diff --git a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > index e04a2c88c7..2e4b27ea21 100644
> > --- a/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > +++ b/drivers/crypto/mvsam/rte_mrvl_pmd_ops.c
> > @@ -704,7 +704,6 @@ mrvl_crypto_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> >  			break;
> >
> >  		qp->sess_mp = qp_conf->mp_session;
> > -		qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  		memset(&qp->stats, 0, sizeof(qp->stats));
> >  		dev->data->queue_pairs[qp_id] = qp;
> > @@ -735,12 +734,9 @@
> > mrvl_crypto_pmd_sym_session_get_size(__rte_unused struct
> > rte_cryptodev *dev)
> >   */
> >  static int
> >  mrvl_crypto_pmd_sym_session_configure(__rte_unused struct
> > rte_cryptodev *dev,
> > -		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mp)
> > +		struct rte_crypto_sym_xform *xform, void *sess)
> >  {
> >  	struct mrvl_crypto_session *mrvl_sess;
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	if (sess == NULL) {
> > @@ -748,25 +744,16 @@
> > mrvl_crypto_pmd_sym_session_configure(__rte_unused struct
> > rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mp, &sess_private_data)) {
> > -		CDEV_LOG_ERR("Couldn't get object from session
> > mempool.");
> > -		return -ENOMEM;
> > -	}
> > +	memset(sess, 0, sizeof(struct mrvl_crypto_session));
> >
> > -	memset(sess_private_data, 0, sizeof(struct mrvl_crypto_session));
> > -
> > -	ret = mrvl_crypto_set_session_parameters(sess_private_data,
> > xform);
> > +	ret = mrvl_crypto_set_session_parameters(sess, xform);
> >  	if (ret != 0) {
> >  		MRVL_LOG(ERR, "Failed to configure session parameters!");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mp, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > sess_private_data);
> >
> > -	mrvl_sess = (struct mrvl_crypto_session *)sess_private_data;
> > +	mrvl_sess = (struct mrvl_crypto_session *)sess;
> >  	if (sam_session_create(&mrvl_sess->sam_sess_params,
> >  				&mrvl_sess->sam_sess) < 0) {
> >  		MRVL_LOG(DEBUG, "Failed to create session!");
> > @@ -789,17 +776,13 @@
> > mrvl_crypto_pmd_sym_session_configure(__rte_unused struct
> > rte_cryptodev *dev,
> >   * @returns 0. Always.
> >   */
> >  static void
> > -mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +mrvl_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void
> > *sess)
> >  {
> > -
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > +	if (sess) {
> >  		struct mrvl_crypto_session *mrvl_sess =
> > -			(struct mrvl_crypto_session *)sess_priv;
> > +			(struct mrvl_crypto_session *)sess;
> >
> >  		if (mrvl_sess->sam_sess &&
> >  		    sam_session_destroy(mrvl_sess->sam_sess) < 0) {
> > @@ -807,9 +790,6 @@ mrvl_crypto_pmd_sym_session_clear(struct
> > rte_cryptodev *dev,
> >  		}
> >
> >  		memset(mrvl_sess, 0, sizeof(struct mrvl_crypto_session));
> > -		struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> >  	}
> >  }
> >
> > diff --git a/drivers/crypto/nitrox/nitrox_sym.c
> > b/drivers/crypto/nitrox/nitrox_sym.c
> > index f8b7edcd69..0c9bbfef46 100644
> > --- a/drivers/crypto/nitrox/nitrox_sym.c
> > +++ b/drivers/crypto/nitrox/nitrox_sym.c
> > @@ -532,22 +532,16 @@ configure_aead_ctx(struct rte_crypto_aead_xform *xform,
> >  static int
> >  nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
> >  			      struct rte_crypto_sym_xform *xform,
> > -			      struct rte_cryptodev_sym_session *sess,
> > -			      struct rte_mempool *mempool)
> > +			      void *sess)
> >  {
> > -	void *mp_obj;
> >  	struct nitrox_crypto_ctx *ctx;
> >  	struct rte_crypto_cipher_xform *cipher_xform = NULL;
> >  	struct rte_crypto_auth_xform *auth_xform = NULL;
> >  	struct rte_crypto_aead_xform *aead_xform = NULL;
> >  	int ret = -EINVAL;
> >
> > -	if (rte_mempool_get(mempool, &mp_obj)) {
> > -		NITROX_LOG(ERR, "Couldn't allocate context\n");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ctx = mp_obj;
> > +	RTE_SET_USED(cdev);
> > +	ctx = sess;
> >  	ctx->nitrox_chain = get_crypto_chain_order(xform);
> >  	switch (ctx->nitrox_chain) {
> >  	case NITROX_CHAIN_CIPHER_ONLY:
> > @@ -586,28 +580,17 @@ nitrox_sym_dev_sess_configure(struct
> > rte_cryptodev *cdev,
> >  	}
> >
> >  	ctx->iova = rte_mempool_virt2iova(ctx);
> > -	set_sym_session_private_data(sess, cdev->driver_id, ctx);
> >  	return 0;
> >  err:
> > -	rte_mempool_put(mempool, mp_obj);
> >  	return ret;
> >  }
> >
> >  static void
> > -nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev,
> > -			  struct rte_cryptodev_sym_session *sess)
> > +nitrox_sym_dev_sess_clear(struct rte_cryptodev *cdev, void *sess)
> >  {
> > -	struct nitrox_crypto_ctx *ctx = get_sym_session_private_data(sess,
> > -							cdev->driver_id);
> > -	struct rte_mempool *sess_mp;
> > -
> > -	if (!ctx)
> > -		return;
> > -
> > -	memset(ctx, 0, sizeof(*ctx));
> > -	sess_mp = rte_mempool_from_obj(ctx);
> > -	set_sym_session_private_data(sess, cdev->driver_id, NULL);
> > -	rte_mempool_put(sess_mp, ctx);
> > +	RTE_SET_USED(cdev);
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct nitrox_crypto_ctx));
> >  }
> >
> >  static struct nitrox_crypto_ctx *
> > diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c
> > b/drivers/crypto/null/null_crypto_pmd_ops.c
> > index a8b5a06e7f..65bfa8dcf7 100644
> > --- a/drivers/crypto/null/null_crypto_pmd_ops.c
> > +++ b/drivers/crypto/null/null_crypto_pmd_ops.c
> > @@ -234,7 +234,6 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> >  	}
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -258,10 +257,8 @@ null_crypto_pmd_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >  static int
> >  null_crypto_pmd_sym_session_configure(struct rte_cryptodev *dev
> > __rte_unused,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mp)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	if (unlikely(sess == NULL)) {
> > @@ -269,42 +266,23 @@ null_crypto_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mp, &sess_private_data)) {
> > -		NULL_LOG(ERR,
> > -				"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = null_crypto_set_session_parameters(sess_private_data,
> > xform);
> > +	ret = null_crypto_set_session_parameters(sess, xform);
> >  	if (ret != 0) {
> >  		NULL_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mp, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +null_crypto_pmd_sym_session_clear(struct rte_cryptodev *dev, void
> > *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct null_crypto_session));
> > -		struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct null_crypto_session));
> >  }
> >
> >  static struct rte_cryptodev_ops pmd_ops = {
> > diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > index 7c6b1e45b4..95659e472b 100644
> > --- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
> > @@ -49,7 +49,6 @@ struct cpt_instance {
> >  	uint32_t queue_id;
> >  	uintptr_t rsvd;
> >  	struct rte_mempool *sess_mp;
> > -	struct rte_mempool *sess_mp_priv;
> >  	struct cpt_qp_meta_info meta_info;
> >  	uint8_t ca_enabled;
> >  };
> > diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > index 9e8fd495cf..abd0963be0 100644
> > --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
> > @@ -171,7 +171,6 @@ otx_cpt_que_pair_setup(struct rte_cryptodev *dev,
> >
> >  	instance->queue_id = que_pair_id;
> >  	instance->sess_mp = qp_conf->mp_session;
> > -	instance->sess_mp_priv = qp_conf->mp_session_private;
> >  	dev->data->queue_pairs[que_pair_id] = instance;
> >
> >  	return 0;
> > @@ -243,29 +242,22 @@ sym_xform_verify(struct rte_crypto_sym_xform
> > *xform)
> >  }
> >
> >  static int
> > -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> > -		      struct rte_cryptodev_sym_session *sess,
> > -		      struct rte_mempool *pool)
> > +sym_session_configure(struct rte_crypto_sym_xform *xform,
> > +		      void *sess)
> >  {
> >  	struct rte_crypto_sym_xform *temp_xform = xform;
> >  	struct cpt_sess_misc *misc;
> >  	vq_cmd_word3_t vq_cmd_w3;
> > -	void *priv;
> >  	int ret;
> >
> >  	ret = sym_xform_verify(xform);
> >  	if (unlikely(ret))
> >  		return ret;
> >
> > -	if (unlikely(rte_mempool_get(pool, &priv))) {
> > -		CPT_LOG_ERR("Could not allocate session private data");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	memset(priv, 0, sizeof(struct cpt_sess_misc) +
> > +	memset(sess, 0, sizeof(struct cpt_sess_misc) +
> >  			offsetof(struct cpt_ctx, mc_ctx));
> >
> > -	misc = priv;
> > +	misc = sess;
> >
> >  	for ( ; xform != NULL; xform = xform->next) {
> >  		switch (xform->type) {
> > @@ -301,8 +293,6 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  		goto priv_put;
> >  	}
> >
> > -	set_sym_session_private_data(sess, driver_id, priv);
> > -
> >  	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
> >  			     sizeof(struct cpt_sess_misc);
> >
> > @@ -316,56 +306,46 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  	return 0;
> >
> >  priv_put:
> > -	if (priv)
> > -		rte_mempool_put(pool, priv);
> >  	return -ENOTSUP;
> >  }
> >
> >  static void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> >  {
> > -	void *priv = get_sym_session_private_data(sess, driver_id);
> >  	struct cpt_sess_misc *misc;
> > -	struct rte_mempool *pool;
> >  	struct cpt_ctx *ctx;
> >
> > -	if (priv == NULL)
> > +	if (sess == NULL)
> >  		return;
> >
> > -	misc = priv;
> > +	misc = sess;
> >  	ctx = SESS_PRIV(misc);
> >
> >  	if (ctx->auth_key != NULL)
> >  		rte_free(ctx->auth_key);
> >
> > -	memset(priv, 0, cpt_get_session_size());
> > -
> > -	pool = rte_mempool_from_obj(priv);
> > -
> > -	set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > -	rte_mempool_put(pool, priv);
> > +	memset(sess, 0, cpt_get_session_size());
> >  }
> >
> >  static int
> >  otx_cpt_session_cfg(struct rte_cryptodev *dev,
> >  		    struct rte_crypto_sym_xform *xform,
> > -		    struct rte_cryptodev_sym_session *sess,
> > -		    struct rte_mempool *pool)
> > +		    void *sess)
> >  {
> >  	CPT_PMD_INIT_FUNC_TRACE();
> > +	RTE_SET_USED(dev);
> >
> > -	return sym_session_configure(dev->driver_id, xform, sess, pool);
> > +	return sym_session_configure(xform, sess);
> >  }
> >
> >
> >  static void
> > -otx_cpt_session_clear(struct rte_cryptodev *dev,
> > -		  struct rte_cryptodev_sym_session *sess)
> > +otx_cpt_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> >  	CPT_PMD_INIT_FUNC_TRACE();
> > +	RTE_SET_USED(dev);
> >
> > -	return sym_session_clear(dev->driver_id, sess);
> > +	return sym_session_clear(sess);
> >  }
> >
> >  static unsigned int
> > @@ -576,7 +556,6 @@ static __rte_always_inline void * __rte_hot
> >  otx_cpt_enq_single_sym_sessless(struct cpt_instance *instance,
> >  				struct rte_crypto_op *op)
> >  {
> > -	const int driver_id = otx_cryptodev_driver_id;
> >  	struct rte_crypto_sym_op *sym_op = op->sym;
> >  	struct rte_cryptodev_sym_session *sess;
> >  	void *req;
> > @@ -589,8 +568,12 @@ otx_cpt_enq_single_sym_sessless(struct
> > cpt_instance *instance,
> >  		return NULL;
> >  	}
> >
> > -	ret = sym_session_configure(driver_id, sym_op->xform, sess,
> > -				    instance->sess_mp_priv);
> > +	sess->sess_data[otx_cryptodev_driver_id].data =
> > +			(void *)((uint8_t *)sess +
> > +			rte_cryptodev_sym_get_header_session_size() +
> > +			(otx_cryptodev_driver_id * sess->priv_sz));
> > +	ret = sym_session_configure(sym_op->xform,
> > +			sess->sess_data[otx_cryptodev_driver_id].data);
> >  	if (ret)
> >  		goto sess_put;
> >
> > @@ -604,7 +587,7 @@ otx_cpt_enq_single_sym_sessless(struct
> > cpt_instance *instance,
> >  	return req;
> >
> >  priv_put:
> > -	sym_session_clear(driver_id, sess);
> > +	sym_session_clear(sess);
> >  sess_put:
> >  	rte_mempool_put(instance->sess_mp, sess);
> >  	return NULL;
> > @@ -913,7 +896,6 @@ free_sym_session_data(const struct cpt_instance
> > *instance,
> >  	memset(cop->sym->session, 0,
> >  	       rte_cryptodev_sym_get_existing_header_session_size(
> >  		       cop->sym->session));
> > -	rte_mempool_put(instance->sess_mp_priv, sess_private_data_t);
> >  	rte_mempool_put(instance->sess_mp, cop->sym->session);
> >  	cop->sym->session = NULL;
> >  }
> > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > index 7b744cd4b4..dcfbc49996 100644
> > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
> > @@ -371,29 +371,21 @@ sym_xform_verify(struct rte_crypto_sym_xform
> > *xform)
> >  }
> >
> >  static int
> > -sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
> > -		      struct rte_cryptodev_sym_session *sess,
> > -		      struct rte_mempool *pool)
> > +sym_session_configure(struct rte_crypto_sym_xform *xform, void *sess)
> >  {
> >  	struct rte_crypto_sym_xform *temp_xform = xform;
> >  	struct cpt_sess_misc *misc;
> >  	vq_cmd_word3_t vq_cmd_w3;
> > -	void *priv;
> >  	int ret;
> >
> >  	ret = sym_xform_verify(xform);
> >  	if (unlikely(ret))
> >  		return ret;
> >
> > -	if (unlikely(rte_mempool_get(pool, &priv))) {
> > -		CPT_LOG_ERR("Could not allocate session private data");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	memset(priv, 0, sizeof(struct cpt_sess_misc) +
> > +	memset(sess, 0, sizeof(struct cpt_sess_misc) +
> >  			offsetof(struct cpt_ctx, mc_ctx));
> >
> > -	misc = priv;
> > +	misc = sess;
> >
> >  	for ( ; xform != NULL; xform = xform->next) {
> >  		switch (xform->type) {
> > @@ -414,7 +406,7 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  		}
> >
> >  		if (ret)
> > -			goto priv_put;
> > +			return -ENOTSUP;
> >  	}
> >
> >  	if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
> > @@ -425,12 +417,9 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  			rte_free(ctx->auth_key);
> >  			ctx->auth_key = NULL;
> >  		}
> > -		ret = -ENOTSUP;
> > -		goto priv_put;
> > +		return -ENOTSUP;
> >  	}
> >
> > -	set_sym_session_private_data(sess, driver_id, misc);
> > -
> >  	misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
> >  			     sizeof(struct cpt_sess_misc);
> >
> > @@ -451,11 +440,6 @@ sym_session_configure(int driver_id, struct
> > rte_crypto_sym_xform *xform,
> >  	misc->cpt_inst_w7 = vq_cmd_w3.u64;
> >
> >  	return 0;
> > -
> > -priv_put:
> > -	rte_mempool_put(pool, priv);
> > -
> > -	return -ENOTSUP;
> >  }
> >
> >  static __rte_always_inline int32_t __rte_hot
> > @@ -765,7 +749,6 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
> >  			      struct pending_queue *pend_q,
> >  			      unsigned int burst_index)
> >  {
> > -	const int driver_id = otx2_cryptodev_driver_id;
> >  	struct rte_crypto_sym_op *sym_op = op->sym;
> >  	struct rte_cryptodev_sym_session *sess;
> >  	int ret;
> > @@ -775,8 +758,12 @@ otx2_cpt_enqueue_sym_sessless(struct
> > otx2_cpt_qp *qp, struct rte_crypto_op *op,
> >  	if (sess == NULL)
> >  		return -ENOMEM;
> >
> > -	ret = sym_session_configure(driver_id, sym_op->xform, sess,
> > -				    qp->sess_mp_priv);
> > +	sess->sess_data[otx2_cryptodev_driver_id].data =
> > +			(void *)((uint8_t *)sess +
> > +			rte_cryptodev_sym_get_header_session_size() +
> > +			(otx2_cryptodev_driver_id * sess->priv_sz));
> > +	ret = sym_session_configure(sym_op->xform,
> > +			sess->sess_data[otx2_cryptodev_driver_id].data);
> >  	if (ret)
> >  		goto sess_put;
> >
> > @@ -790,7 +777,7 @@ otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
> >  	return 0;
> >
> >  priv_put:
> > -	sym_session_clear(driver_id, sess);
> > +	sym_session_clear(sess);
> >  sess_put:
> >  	rte_mempool_put(qp->sess_mp, sess);
> >  	return ret;
> > @@ -1035,8 +1022,7 @@ otx2_cpt_dequeue_post_process(struct
> > otx2_cpt_qp *qp, struct rte_crypto_op *cop,
> >  		}
> >
> >  		if (unlikely(cop->sess_type ==
> > RTE_CRYPTO_OP_SESSIONLESS)) {
> > -			sym_session_clear(otx2_cryptodev_driver_id,
> > -					  cop->sym->session);
> > +			sym_session_clear(cop->sym->session);
> >  			sz =
> > rte_cryptodev_sym_get_existing_header_session_size(
> >  					cop->sym->session);
> >  			memset(cop->sym->session, 0, sz);
> > @@ -1291,7 +1277,6 @@ otx2_cpt_queue_pair_setup(struct rte_cryptodev
> > *dev, uint16_t qp_id,
> >  	}
> >
> >  	qp->sess_mp = conf->mp_session;
> > -	qp->sess_mp_priv = conf->mp_session_private;
> >  	dev->data->queue_pairs[qp_id] = qp;
> >
> >  	return 0;
> > @@ -1330,21 +1315,22 @@ otx2_cpt_sym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >  static int
> >  otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
> >  			       struct rte_crypto_sym_xform *xform,
> > -			       struct rte_cryptodev_sym_session *sess,
> > -			       struct rte_mempool *pool)
> > +			       void *sess)
> >  {
> >  	CPT_PMD_INIT_FUNC_TRACE();
> > +	RTE_SET_USED(dev);
> >
> > -	return sym_session_configure(dev->driver_id, xform, sess, pool);
> > +	return sym_session_configure(xform, sess);
> >  }
> >
> >  static void
> >  otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
> > -			   struct rte_cryptodev_sym_session *sess)
> > +			   void *sess)
> >  {
> >  	CPT_PMD_INIT_FUNC_TRACE();
> > +	RTE_SET_USED(dev);
> >
> > -	return sym_session_clear(dev->driver_id, sess);
> > +	return sym_session_clear(sess);
> >  }
> >
> >  static unsigned int
> > diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > index 01c081a216..5f63eaf7b7 100644
> > --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
> > @@ -8,29 +8,21 @@
> >  #include "cpt_pmd_logs.h"
> >
> >  static void
> > -sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
> > +sym_session_clear(void *sess)
> >  {
> > -	void *priv = get_sym_session_private_data(sess, driver_id);
> >  	struct cpt_sess_misc *misc;
> > -	struct rte_mempool *pool;
> >  	struct cpt_ctx *ctx;
> >
> > -	if (priv == NULL)
> > +	if (sess == NULL)
> >  		return;
> >
> > -	misc = priv;
> > +	misc = sess;
> >  	ctx = SESS_PRIV(misc);
> >
> >  	if (ctx->auth_key != NULL)
> >  		rte_free(ctx->auth_key);
> >
> > -	memset(priv, 0, cpt_get_session_size());
> > -
> > -	pool = rte_mempool_from_obj(priv);
> > -
> > -	set_sym_session_private_data(sess, driver_id, NULL);
> > -
> > -	rte_mempool_put(pool, priv);
> > +	memset(sess, 0, cpt_get_session_size());
> >  }
> >
> >  static __rte_always_inline uint8_t
> > diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > index 52715f86f8..1b48a6b400 100644
> > --- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > +++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
> > @@ -741,7 +741,6 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev,
> > uint16_t qp_id,
> >  		goto qp_setup_cleanup;
> >
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->stats, 0, sizeof(qp->stats));
> >
> > @@ -772,10 +771,8 @@ openssl_pmd_asym_session_get_size(struct
> > rte_cryptodev *dev __rte_unused)
> >  static int
> >  openssl_pmd_sym_session_configure(struct rte_cryptodev *dev
> > __rte_unused,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	if (unlikely(sess == NULL)) {
> > @@ -783,24 +780,12 @@ openssl_pmd_sym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		OPENSSL_LOG(ERR,
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = openssl_set_session_parameters(sess_private_data, xform);
> > +	ret = openssl_set_session_parameters(sess, xform);
> >  	if (ret != 0) {
> >  		OPENSSL_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -			sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> > @@ -1154,19 +1139,13 @@ openssl_pmd_asym_session_configure(struct
> > rte_cryptodev *dev __rte_unused,
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -openssl_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +openssl_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		openssl_reset_session(sess_priv);
> > -		memset(sess_priv, 0, sizeof(struct openssl_session));
> > -		struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > +	if (sess) {
> > +		openssl_reset_session(sess);
> > +		memset(sess, 0, sizeof(struct openssl_session));
> >  	}
> >  }
> >
> > diff --git a/drivers/crypto/qat/qat_sym_session.c
> > b/drivers/crypto/qat/qat_sym_session.c
> > index 2a22347c7f..114bf081c1 100644
> > --- a/drivers/crypto/qat/qat_sym_session.c
> > +++ b/drivers/crypto/qat/qat_sym_session.c
> > @@ -172,21 +172,14 @@ qat_is_auth_alg_supported(enum
> > rte_crypto_auth_algorithm algo,
> >  }
> >
> >  void
> > -qat_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +qat_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
> > +	struct qat_sym_session *s = (struct qat_sym_session *)sess;
> >
> > -	if (sess_priv) {
> > +	if (sess) {
> >  		if (s->bpi_ctx)
> >  			bpi_cipher_ctx_free(s->bpi_ctx);
> >  		memset(s, 0, qat_sym_session_get_private_size(dev));
> > -		struct rte_mempool *sess_mp =
> > rte_mempool_from_obj(sess_priv);
> > -
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> >  	}
> >  }
> >
> > @@ -458,31 +451,17 @@ qat_sym_session_configure_cipher(struct
> > rte_cryptodev *dev,
> >  int
> >  qat_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess_private_data)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		CDEV_LOG_ERR(
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> >  	ret = qat_sym_session_set_parameters(dev, xform,
> > sess_private_data);
> >  	if (ret != 0) {
> >  		QAT_LOG(ERR,
> >  		    "Crypto QAT PMD: failed to configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> > diff --git a/drivers/crypto/qat/qat_sym_session.h
> > b/drivers/crypto/qat/qat_sym_session.h
> > index 7fcc1d6f7b..6da29e2305 100644
> > --- a/drivers/crypto/qat/qat_sym_session.h
> > +++ b/drivers/crypto/qat/qat_sym_session.h
> > @@ -112,8 +112,7 @@ struct qat_sym_session {
> >  int
> >  qat_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool);
> > +		void *sess);
> >
> >  int
> >  qat_sym_session_set_parameters(struct rte_cryptodev *dev,
> > @@ -135,8 +134,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
> >  				struct qat_sym_session *session);
> >
> >  void
> > -qat_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *session);
> > +qat_sym_session_clear(struct rte_cryptodev *dev, void *session);
> >
> >  unsigned int
> >  qat_sym_session_get_private_size(struct rte_cryptodev *dev);
> > diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > index 465b88ade8..87260b5a22 100644
> > --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> > @@ -476,9 +476,7 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >
> >  static int
> >  scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> > -	struct rte_crypto_sym_xform *xform,
> > -	struct rte_cryptodev_sym_session *sess,
> > -	struct rte_mempool *mempool)
> > +	struct rte_crypto_sym_xform *xform, void *sess)
> >  {
> >  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> >  	uint32_t i;
> > @@ -488,7 +486,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct scheduler_worker *worker = &sched_ctx->workers[i];
> >
> >  		ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
> > -					xform, mempool);
> > +					xform);
> >  		if (ret < 0) {
> >  			CR_SCHED_LOG(ERR, "unable to config sym session");
> >  			return ret;
> > @@ -500,8 +498,7 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> >  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> >  	uint32_t i;
> > diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > index 3f46014b7d..b0f8f6d86a 100644
> > --- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > +++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
> > @@ -226,7 +226,6 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >
> >  	qp->mgr = internals->mgr;
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -250,10 +249,8 @@ snow3g_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  static int
> >  snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >  	struct snow3g_private *internals = dev->data->dev_private;
> >
> > @@ -262,43 +259,24 @@ snow3g_pmd_sym_session_configure(struct rte_cryptodev *dev,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		SNOW3G_LOG(ERR,
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> >  	ret = snow3g_set_session_parameters(internals->mgr,
> > -					sess_private_data, xform);
> > +					sess, xform);
> >  	if (ret != 0) {
> >  		SNOW3G_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +snow3g_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct snow3g_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct snow3g_session));
> >  }
> >
> >  struct rte_cryptodev_ops snow3g_pmd_ops = {
> > diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
> > index 8faa39df4a..de52fec32e 100644
> > --- a/drivers/crypto/virtio/virtio_cryptodev.c
> > +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> > @@ -37,11 +37,10 @@ static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev);
> >  static unsigned int virtio_crypto_sym_get_session_private_size(
> >  		struct rte_cryptodev *dev);
> >  static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess);
> > +		void *sess);
> >  static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *session,
> > -		struct rte_mempool *mp);
> > +		void *session);
> >
> >  /*
> >   * The set of PCI devices this driver supports
> > @@ -927,7 +926,7 @@ virtio_crypto_check_sym_clear_session_paras(
> >  static void
> >  virtio_crypto_sym_clear_session(
> >  		struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +		void *sess)
> >  {
> >  	struct virtio_crypto_hw *hw;
> >  	struct virtqueue *vq;
> > @@ -1290,11 +1289,9 @@ static int
> >  virtio_crypto_check_sym_configure_session_paras(
> >  		struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sym_sess,
> > -		struct rte_mempool *mempool)
> > +		void *sym_sess)
> >  {
> > -	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
> > -		unlikely(mempool == NULL)) {
> > +	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL)) {
> >  		VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
> >  		return -1;
> >  	}
> > @@ -1309,12 +1306,9 @@ static int
> >  virtio_crypto_sym_configure_session(
> >  		struct rte_cryptodev *dev,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> >  	int ret;
> > -	struct virtio_crypto_session crypto_sess;
> > -	void *session_private = &crypto_sess;
> >  	struct virtio_crypto_session *session;
> >  	struct virtio_crypto_op_ctrl_req *ctrl_req;
> >  	enum virtio_crypto_cmd_id cmd_id;
> > @@ -1326,19 +1320,13 @@ virtio_crypto_sym_configure_session(
> >  	PMD_INIT_FUNC_TRACE();
> >
> >  	ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
> > -			sess, mempool);
> > +			sess);
> >  	if (ret < 0) {
> >  		VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
> >  		return ret;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &session_private)) {
> > -		VIRTIO_CRYPTO_SESSION_LOG_ERR(
> > -			"Couldn't get object from session mempool");
> > -		return -ENOMEM;
> > -	}
> > -
> > -	session = (struct virtio_crypto_session *)session_private;
> > +	session = (struct virtio_crypto_session *)sess;
> >  	memset(session, 0, sizeof(struct virtio_crypto_session));
> >  	ctrl_req = &session->ctrl;
> >  	ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
> > @@ -1401,9 +1389,6 @@ virtio_crypto_sym_configure_session(
> >  		goto error_out;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		session_private);
> > -
> >  	return 0;
> >
> >  error_out:
> > diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > index 38642d45ab..04126c8a04 100644
> > --- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > +++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
> > @@ -226,7 +226,6 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
> >
> >  	qp->mb_mgr = internals->mb_mgr;
> >  	qp->sess_mp = qp_conf->mp_session;
> > -	qp->sess_mp_priv = qp_conf->mp_session_private;
> >
> >  	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
> >
> > @@ -250,10 +249,8 @@ zuc_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> >  static int
> >  zuc_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
> >  		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_mempool *mempool)
> > +		void *sess)
> >  {
> > -	void *sess_private_data;
> >  	int ret;
> >
> >  	if (unlikely(sess == NULL)) {
> > @@ -261,43 +258,23 @@ zuc_pmd_sym_session_configure(struct rte_cryptodev *dev __rte_unused,
> >  		return -EINVAL;
> >  	}
> >
> > -	if (rte_mempool_get(mempool, &sess_private_data)) {
> > -		ZUC_LOG(ERR,
> > -			"Couldn't get object from session mempool");
> > -
> > -		return -ENOMEM;
> > -	}
> > -
> > -	ret = zuc_set_session_parameters(sess_private_data, xform);
> > +	ret = zuc_set_session_parameters(sess, xform);
> >  	if (ret != 0) {
> >  		ZUC_LOG(ERR, "failed configure session parameters");
> > -
> > -		/* Return session to mempool */
> > -		rte_mempool_put(mempool, sess_private_data);
> >  		return ret;
> >  	}
> >
> > -	set_sym_session_private_data(sess, dev->driver_id,
> > -		sess_private_data);
> > -
> >  	return 0;
> >  }
> >
> >  /** Clear the memory of session so it doesn't leave key material behind */
> >  static void
> > -zuc_pmd_sym_session_clear(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess)
> > +zuc_pmd_sym_session_clear(struct rte_cryptodev *dev, void *sess)
> >  {
> > -	uint8_t index = dev->driver_id;
> > -	void *sess_priv = get_sym_session_private_data(sess, index);
> > -
> > +	RTE_SET_USED(dev);
> >  	/* Zero out the whole structure */
> > -	if (sess_priv) {
> > -		memset(sess_priv, 0, sizeof(struct zuc_session));
> > -		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
> > -		set_sym_session_private_data(sess, index, NULL);
> > -		rte_mempool_put(sess_mp, sess_priv);
> > -	}
> > +	if (sess)
> > +		memset(sess, 0, sizeof(struct zuc_session));
> >  }
> >
> >  struct rte_cryptodev_ops zuc_pmd_ops = {
> > diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > index b33cb7e139..8522f2dfda 100644
> > --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
> > @@ -38,8 +38,7 @@ otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
> >  		}
> >
> >  		if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
> > -			sym_session_clear(otx2_cryptodev_driver_id,
> > -					  cop->sym->session);
> > +			sym_session_clear(cop->sym->session);
> >  			memset(cop->sym->session, 0,
> >  				rte_cryptodev_sym_get_existing_header_session_size(
> >  				cop->sym->session));
> > diff --git a/examples/fips_validation/fips_dev_self_test.c b/examples/fips_validation/fips_dev_self_test.c
> > index b4eab05a98..bbc27a1b6f 100644
> > --- a/examples/fips_validation/fips_dev_self_test.c
> > +++ b/examples/fips_validation/fips_dev_self_test.c
> > @@ -969,7 +969,6 @@ struct fips_dev_auto_test_env {
> >  	struct rte_mempool *mpool;
> >  	struct rte_mempool *op_pool;
> >  	struct rte_mempool *sess_pool;
> > -	struct rte_mempool *sess_priv_pool;
> >  	struct rte_mbuf *mbuf;
> >  	struct rte_crypto_op *op;
> >  };
> > @@ -981,7 +980,7 @@ typedef int (*fips_dev_self_test_prepare_xform_t)(uint8_t,
> >  		uint32_t);
> >
> >  typedef int (*fips_dev_self_test_prepare_op_t)(struct rte_crypto_op *,
> > -		struct rte_mbuf *, struct rte_cryptodev_sym_session *,
> > +		struct rte_mbuf *, void *,
> >  		uint32_t, struct fips_dev_self_test_vector *);
> >
> >  typedef int (*fips_dev_self_test_check_result_t)(struct rte_crypto_op *,
> > @@ -1173,7 +1172,7 @@ prepare_aead_xform(uint8_t dev_id,
> >  static int
> >  prepare_cipher_op(struct rte_crypto_op *op,
> >  		struct rte_mbuf *mbuf,
> > -		struct rte_cryptodev_sym_session *session,
> > +		void *session,
> >  		uint32_t dir,
> >  		struct fips_dev_self_test_vector *vec)
> >  {
> > @@ -1212,7 +1211,7 @@ prepare_cipher_op(struct rte_crypto_op *op,
> >  static int
> >  prepare_auth_op(struct rte_crypto_op *op,
> >  		struct rte_mbuf *mbuf,
> > -		struct rte_cryptodev_sym_session *session,
> > +		void *session,
> >  		uint32_t dir,
> >  		struct fips_dev_self_test_vector *vec)
> >  {
> > @@ -1251,7 +1250,7 @@ prepare_auth_op(struct rte_crypto_op *op,
> >  static int
> >  prepare_aead_op(struct rte_crypto_op *op,
> >  		struct rte_mbuf *mbuf,
> > -		struct rte_cryptodev_sym_session *session,
> > +		void *session,
> >  		uint32_t dir,
> >  		struct fips_dev_self_test_vector *vec)
> >  {
> > @@ -1464,7 +1463,7 @@ run_single_test(uint8_t dev_id,
> >  		uint32_t negative_test)
> >  {
> >  	struct rte_crypto_sym_xform xform;
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >  	uint16_t n_deqd;
> >  	uint8_t key[256];
> >  	int ret;
> > @@ -1484,8 +1483,7 @@ run_single_test(uint8_t dev_id,
> >  	if (!sess)
> >  		return -ENOMEM;
> >
> > -	ret = rte_cryptodev_sym_session_init(dev_id,
> > -			sess, &xform, env->sess_priv_pool);
> > +	ret = rte_cryptodev_sym_session_init(dev_id, sess, &xform);
> >  	if (ret < 0) {
> >  		RTE_LOG(ERR, PMD, "Error %i: Init session\n", ret);
> >  		return ret;
> > @@ -1533,8 +1531,6 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
> >  		rte_mempool_free(env->op_pool);
> >  	if (env->sess_pool)
> >  		rte_mempool_free(env->sess_pool);
> > -	if (env->sess_priv_pool)
> > -		rte_mempool_free(env->sess_priv_pool);
> >
> >  	rte_cryptodev_stop(dev_id);
> >  }
> > @@ -1542,7 +1538,7 @@ fips_dev_auto_test_uninit(uint8_t dev_id,
> >  static int
> >  fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
> >  {
> > -	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> > +	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
> >  	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(dev_id);
> >  	struct rte_cryptodev_config conf;
> >  	char name[128];
> > @@ -1586,25 +1582,13 @@ fips_dev_auto_test_init(uint8_t dev_id, struct fips_dev_auto_test_env *env)
> >  	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_POOL", dev_id);
> >
> >  	env->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> > -			128, 0, 0, 0, rte_cryptodev_socket_id(dev_id));
> > +			128, sess_sz, 0, 0, rte_cryptodev_socket_id(dev_id));
> >  	if (!env->sess_pool) {
> >  		ret = -ENOMEM;
> >  		goto error_exit;
> >  	}
> >
> > -	memset(name, 0, 128);
> > -	snprintf(name, 128, "%s%u", "SELF_TEST_SESS_PRIV_POOL", dev_id);
> > -
> > -	env->sess_priv_pool = rte_mempool_create(name,
> > -			128, sess_sz, 0, 0, NULL, NULL, NULL,
> > -			NULL, rte_cryptodev_socket_id(dev_id), 0);
> > -	if (!env->sess_priv_pool) {
> > -		ret = -ENOMEM;
> > -		goto error_exit;
> > -	}
> > -
> >  	qp_conf.mp_session = env->sess_pool;
> > -	qp_conf.mp_session_private = env->sess_priv_pool;
> >
> >  	ret = rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
> >  			rte_cryptodev_socket_id(dev_id));
> > diff --git a/examples/fips_validation/main.c b/examples/fips_validation/main.c
> > index a8daad1f48..03c6ccb5b8 100644
> > --- a/examples/fips_validation/main.c
> > +++ b/examples/fips_validation/main.c
> > @@ -48,13 +48,12 @@ struct cryptodev_fips_validate_env {
> >  	uint16_t mbuf_data_room;
> >  	struct rte_mempool *mpool;
> >  	struct rte_mempool *sess_mpool;
> > -	struct rte_mempool *sess_priv_mpool;
> >  	struct rte_mempool *op_pool;
> >  	struct rte_mbuf *mbuf;
> >  	uint8_t *digest;
> >  	uint16_t digest_len;
> >  	struct rte_crypto_op *op;
> > -	struct rte_cryptodev_sym_session *sess;
> > +	void *sess;
> >  	uint16_t self_test;
> >  	struct fips_dev_broken_test_config *broken_test_config;
> >  } env;
> > @@ -63,7 +62,7 @@ static int
> >  cryptodev_fips_validate_app_int(void)
> >  {
> >  	struct rte_cryptodev_config conf = {rte_socket_id(), 1, 0};
> > -	struct rte_cryptodev_qp_conf qp_conf = {128, NULL, NULL};
> > +	struct rte_cryptodev_qp_conf qp_conf = {128, NULL};
> >  	struct rte_cryptodev_info dev_info;
> >  	uint32_t sess_sz = rte_cryptodev_sym_get_private_session_size(
> >  			env.dev_id);
> > @@ -103,16 +102,11 @@ cryptodev_fips_validate_app_int(void)
> >  	ret = -ENOMEM;
> >
> >  	env.sess_mpool = rte_cryptodev_sym_session_pool_create(
> > -			"FIPS_SESS_MEMPOOL", 16, 0, 0, 0, rte_socket_id());
> > +			"FIPS_SESS_MEMPOOL", 16, sess_sz, 0, 0,
> > +			rte_socket_id());
> >  	if (!env.sess_mpool)
> >  		goto error_exit;
> >
> > -	env.sess_priv_mpool = rte_mempool_create("FIPS_SESS_PRIV_MEMPOOL",
> > -			16, sess_sz, 0, 0, NULL, NULL, NULL,
> > -			NULL, rte_socket_id(), 0);
> > -	if (!env.sess_priv_mpool)
> > -		goto error_exit;
> > -
> >  	env.op_pool = rte_crypto_op_pool_create(
> >  			"FIPS_OP_POOL",
> >  			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> > @@ -127,7 +121,6 @@ cryptodev_fips_validate_app_int(void)
> >  		goto error_exit;
> >
> >  	qp_conf.mp_session = env.sess_mpool;
> > -	qp_conf.mp_session_private = env.sess_priv_mpool;
> >
> >  	ret = rte_cryptodev_queue_pair_setup(env.dev_id, 0, &qp_conf,
> >  			rte_socket_id());
> > @@ -141,8 +134,6 @@ cryptodev_fips_validate_app_int(void)
> >  	rte_mempool_free(env.mpool);
> >  	if (env.sess_mpool)
> >  		rte_mempool_free(env.sess_mpool);
> > -	if (env.sess_priv_mpool)
> > -		rte_mempool_free(env.sess_priv_mpool);
> >  	if (env.op_pool)
> >  		rte_mempool_free(env.op_pool);
> >
> > @@ -158,7 +149,6 @@ cryptodev_fips_validate_app_uninit(void)
> >  	rte_cryptodev_sym_session_free(env.sess);
> >  	rte_mempool_free(env.mpool);
> >  	rte_mempool_free(env.sess_mpool);
> > -	rte_mempool_free(env.sess_priv_mpool);
> >  	rte_mempool_free(env.op_pool);
> >  }
> >
> > @@ -1179,7 +1169,7 @@ fips_run_test(void)
> >  		return -ENOMEM;
> >
> >  	ret = rte_cryptodev_sym_session_init(env.dev_id,
> > -			env.sess, &xform, env.sess_priv_mpool);
> > +			env.sess, &xform);
> >  	if (ret < 0) {
> >  		RTE_LOG(ERR, USER1, "Error %i: Init session\n",
> >  				ret);
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> > index 7ad94cb822..65528ee2e7 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -1216,15 +1216,11 @@ ipsec_poll_mode_worker(void)
> >  	qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
> >  	qconf->inbound.cdev_map = cdev_map_in;
> >  	qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
> > -	qconf->inbound.session_priv_pool =
> > -			socket_ctx[socket_id].session_priv_pool;
> >  	qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
> >  	qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
> >  	qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
> >  	qconf->outbound.cdev_map = cdev_map_out;
> >  	qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
> > -	qconf->outbound.session_priv_pool =
> > -			socket_ctx[socket_id].session_priv_pool;
> >  	qconf->frag.pool_dir = socket_ctx[socket_id].mbuf_pool;
> >  	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
> >
> > @@ -2142,8 +2138,6 @@ cryptodevs_init(uint16_t req_queue_num)
> >  		qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
> >  		qp_conf.mp_session =
> >  			socket_ctx[dev_conf.socket_id].session_pool;
> > -		qp_conf.mp_session_private =
> > -			socket_ctx[dev_conf.socket_id].session_priv_pool;
> >  		for (qp = 0; qp < dev_conf.nb_queue_pairs; qp++)
> >  			if (rte_cryptodev_queue_pair_setup(cdev_id, qp,
> >  					&qp_conf, dev_conf.socket_id))
> > @@ -2405,37 +2399,37 @@ session_pool_init(struct socket_ctx *ctx, int32_t socket_id, size_t sess_sz)
> >  		printf("Allocated session pool on socket %d\n",
> >  			socket_id);
> >  }
> >
> > -static void
> > -session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> > -	size_t sess_sz)
> > -{
> > -	char mp_name[RTE_MEMPOOL_NAMESIZE];
> > -	struct rte_mempool *sess_mp;
> > -	uint32_t nb_sess;
> > -
> > -	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > -			"sess_mp_priv_%u", socket_id);
> > -	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> > -		rte_lcore_count());
> > -	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> > -			CDEV_MP_CACHE_MULTIPLIER);
> > -	sess_mp = rte_mempool_create(mp_name,
> > -			nb_sess,
> > -			sess_sz,
> > -			CDEV_MP_CACHE_SZ,
> > -			0, NULL, NULL, NULL,
> > -			NULL, socket_id,
> > -			0);
> > -	ctx->session_priv_pool = sess_mp;
> > -
> > -	if (ctx->session_priv_pool == NULL)
> > -		rte_exit(EXIT_FAILURE,
> > -			"Cannot init session priv pool on socket %d\n",
> > -			socket_id);
> > -	else
> > -		printf("Allocated session priv pool on socket %d\n",
> > -			socket_id);
> > -}
> > +//static void
> > +//session_priv_pool_init(struct socket_ctx *ctx, int32_t socket_id,
> > +//	size_t sess_sz)
> > +//{
> > +//	char mp_name[RTE_MEMPOOL_NAMESIZE];
> > +//	struct rte_mempool *sess_mp;
> > +//	uint32_t nb_sess;
> > +//
> > +//	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > +//			"sess_mp_priv_%u", socket_id);
> > +//	nb_sess = (get_nb_crypto_sessions() + CDEV_MP_CACHE_SZ *
> > +//		rte_lcore_count());
> > +//	nb_sess = RTE_MAX(nb_sess, CDEV_MP_CACHE_SZ *
> > +//			CDEV_MP_CACHE_MULTIPLIER);
> > +//	sess_mp = rte_mempool_create(mp_name,
> > +//			nb_sess,
> > +//			sess_sz,
> > +//			CDEV_MP_CACHE_SZ,
> > +//			0, NULL, NULL, NULL,
> > +//			NULL, socket_id,
> > +//			0);
> > +//	ctx->session_priv_pool = sess_mp;
> > +//
> > +//	if (ctx->session_priv_pool == NULL)
> > +//		rte_exit(EXIT_FAILURE,
> > +//			"Cannot init session priv pool on socket %d\n",
> > +//			socket_id);
> > +//	else
> > +//		printf("Allocated session priv pool on socket %d\n",
> > +//			socket_id);
> > +//}
> >
> >  static void
> >  pool_init(struct socket_ctx *ctx, int32_t socket_id, uint32_t nb_mbuf)
> > @@ -2938,8 +2932,8 @@ main(int32_t argc, char **argv)
> >
> >  		pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool);
> >  		session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz);
> > -		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> > -			sess_sz);
> > +//		session_priv_pool_init(&socket_ctx[socket_id], socket_id,
> > +//			sess_sz);
> >  	}
> >  	printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool);
> >
> > diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> > index 03d907cba8..a5921de11c 100644
> > --- a/examples/ipsec-secgw/ipsec.c
> > +++ b/examples/ipsec-secgw/ipsec.c
> > @@ -143,8 +143,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa,
> >  		ips->crypto.ses = rte_cryptodev_sym_session_create(
> >  				ipsec_ctx->session_pool);
> >  		rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
> > -				ips->crypto.ses, sa->xforms,
> > -				ipsec_ctx->session_priv_pool);
> > +				ips->crypto.ses, sa->xforms);
> >
> >  		rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
> >  				&cdev_info);
> > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> > index 8405c48171..673c64e8dc 100644
> > --- a/examples/ipsec-secgw/ipsec.h
> > +++ b/examples/ipsec-secgw/ipsec.h
> > @@ -243,7 +243,6 @@ struct socket_ctx {
> >  	struct rte_mempool *mbuf_pool;
> >  	struct rte_mempool *mbuf_pool_indir;
> >  	struct rte_mempool *session_pool;
> > -	struct rte_mempool *session_priv_pool;
> >  };
> >
> >  struct cnt_blk {
> > diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
> > index c545497cee..04bcce49db 100644
> > --- a/examples/ipsec-secgw/ipsec_worker.c
> > +++ b/examples/ipsec-secgw/ipsec_worker.c
> > @@ -537,14 +537,10 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
> >  	lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
> >  	lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in;
> >  	lconf.inbound.session_pool = socket_ctx[socket_id].session_pool;
> > -	lconf.inbound.session_priv_pool =
> > -			socket_ctx[socket_id].session_priv_pool;
> >  	lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
> >  	lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
> >  	lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out;
> >  	lconf.outbound.session_pool = socket_ctx[socket_id].session_pool;
> > -	lconf.outbound.session_priv_pool =
> > -			socket_ctx[socket_id].session_priv_pool;
> >
> >  	RTE_LOG(INFO, IPSEC,
> >  		"Launching event mode worker (non-burst - Tx internal port - "
> > diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> > index 66d1491bf7..bdc3da731c 100644
> > --- a/examples/l2fwd-crypto/main.c
> > +++ b/examples/l2fwd-crypto/main.c
> > @@ -188,7 +188,7 @@ struct l2fwd_crypto_params {
> >  	struct l2fwd_iv auth_iv;
> >  	struct l2fwd_iv aead_iv;
> >  	struct l2fwd_key aad;
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >
> >  	uint8_t do_cipher;
> >  	uint8_t do_hash;
> > @@ -229,7 +229,6 @@ struct rte_mempool *l2fwd_pktmbuf_pool;
> >  struct rte_mempool *l2fwd_crypto_op_pool;
> >  static struct {
> >  	struct rte_mempool *sess_mp;
> > -	struct rte_mempool *priv_mp;
> >  } session_pool_socket[RTE_MAX_NUMA_NODES];
> >
> >  /* Per-port statistics struct */
> > @@ -671,11 +670,11 @@ generate_random_key(uint8_t *key, unsigned length)
> >  }
> >
> >  /* Session is created and is later attached to the crypto operation. 8< */
> > -static struct rte_cryptodev_sym_session *
> > +static void *
> >  initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
> >  {
> >  	struct rte_crypto_sym_xform *first_xform;
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >  	int retval = rte_cryptodev_socket_id(cdev_id);
> >
> >  	if (retval < 0)
> > @@ -703,8 +702,7 @@ initialize_crypto_session(struct l2fwd_crypto_options *options, uint8_t cdev_id)
> >  		return NULL;
> >
> >  	if (rte_cryptodev_sym_session_init(cdev_id, session,
> > -				first_xform,
> > -				session_pool_socket[socket_id].priv_mp) < 0)
> > +				first_xform) < 0)
> >  		return NULL;
> >
> >  	return session;
> > @@ -730,7 +728,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
> >  			US_PER_S * BURST_TX_DRAIN_US;
> >  	struct l2fwd_crypto_params *cparams;
> >  	struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >
> >  	if (qconf->nb_rx_ports == 0) {
> >  		RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
> > @@ -2388,30 +2386,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
> >  		} else
> >  			sessions_needed = enabled_cdev_count;
> >
> > -		if (session_pool_socket[socket_id].priv_mp == NULL) {
> > -			char mp_name[RTE_MEMPOOL_NAMESIZE];
> > -
> > -			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > -				"priv_sess_mp_%u", socket_id);
> > -
> > -			session_pool_socket[socket_id].priv_mp =
> > -					rte_mempool_create(mp_name,
> > -						sessions_needed,
> > -						max_sess_sz,
> > -						0, 0, NULL, NULL, NULL,
> > -						NULL, socket_id,
> > -						0);
> > -
> > -			if (session_pool_socket[socket_id].priv_mp == NULL) {
> > -				printf("Cannot create pool on socket %d\n",
> > -					socket_id);
> > -				return -ENOMEM;
> > -			}
> > -
> > -			printf("Allocated pool \"%s\" on socket %d\n",
> > -				mp_name, socket_id);
> > -		}
> > -
> >  		if (session_pool_socket[socket_id].sess_mp == NULL) {
> >  			char mp_name[RTE_MEMPOOL_NAMESIZE];
> >  			snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
> > @@ -2421,7 +2395,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
> >  				rte_cryptodev_sym_session_pool_create(
> >  							mp_name,
> >  							sessions_needed,
> > -							0, 0, 0, socket_id);
> > +							max_sess_sz,
> > +							0, 0, socket_id);
> >
> >  			if (session_pool_socket[socket_id].sess_mp == NULL) {
> >  				printf("Cannot create pool on socket %d\n",
> > @@ -2573,8 +2548,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
> >
> >  		qp_conf.nb_descriptors = 2048;
> >  		qp_conf.mp_session = session_pool_socket[socket_id].sess_mp;
> > -		qp_conf.mp_session_private =
> > -				session_pool_socket[socket_id].priv_mp;
> >
> >  		retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
> >  				socket_id);
> > diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
> > index dea7dcbd07..cbb97aaf76 100644
> > --- a/examples/vhost_crypto/main.c
> > +++ b/examples/vhost_crypto/main.c
> > @@ -46,7 +46,6 @@ struct vhost_crypto_info {
> >  	int vids[MAX_NB_SOCKETS];
> >  	uint32_t nb_vids;
> >  	struct rte_mempool *sess_pool;
> > -	struct rte_mempool *sess_priv_pool;
> >  	struct rte_mempool *cop_pool;
> >  	uint8_t cid;
> >  	uint32_t qid;
> > @@ -304,7 +303,6 @@ new_device(int vid)
> >  	}
> >
> >  	ret = rte_vhost_crypto_create(vid, info->cid, info->sess_pool,
> > -			info->sess_priv_pool,
> >  			rte_lcore_to_socket_id(options.los[i].lcore_id));
> >  	if (ret) {
> >  		RTE_LOG(ERR, USER1, "Cannot create vhost crypto\n");
> > @@ -458,7 +456,6 @@ free_resource(void)
> >
> >  		rte_mempool_free(info->cop_pool);
> >  		rte_mempool_free(info->sess_pool);
> > -		rte_mempool_free(info->sess_priv_pool);
> >
> >  		for (j = 0; j < lo->nb_sockets; j++) {
> >  			rte_vhost_driver_unregister(lo->socket_files[i]);
> > @@ -544,16 +541,12 @@ main(int argc, char *argv[])
> >
> >  		snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
> >  		info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> > -				SESSION_MAP_ENTRIES, 0, 0, 0,
> > -				rte_lcore_to_socket_id(lo->lcore_id));
> > -
> > -		snprintf(name, 127, "SESS_POOL_PRIV_%u", lo->lcore_id);
> > -		info->sess_priv_pool = rte_mempool_create(name,
> >  				SESSION_MAP_ENTRIES,
> >  				rte_cryptodev_sym_get_private_session_size(
> > -				info->cid), 64, 0, NULL, NULL, NULL, NULL,
> > -				rte_lcore_to_socket_id(lo->lcore_id), 0);
> > -		if (!info->sess_priv_pool || !info->sess_pool) {
> > +					info->cid), 0, 0,
> > +				rte_lcore_to_socket_id(lo->lcore_id));
> > +
> > +		if (!info->sess_pool) {
> >  			RTE_LOG(ERR, USER1, "Failed to create mempool");
> >  			goto error_exit;
> >  		}
> > @@ -574,7 +567,6 @@ main(int argc, char *argv[])
> >
> >  		qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
> >  		qp_conf.mp_session = info->sess_pool;
> > -		qp_conf.mp_session_private = info->sess_priv_pool;
> >
> >  		for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
> >  			ret = rte_cryptodev_queue_pair_setup(info->cid, j,
> > diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
> > index ae3aac59ae..d758b3b85d 100644
> > --- a/lib/cryptodev/cryptodev_pmd.h
> > +++ b/lib/cryptodev/cryptodev_pmd.h
> > @@ -242,7 +242,6 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
> >   * @param	dev		Crypto device pointer
> >   * @param	xform		Single or chain of crypto xforms
> >   * @param	session		Pointer to cryptodev's private session structure
> > - * @param	mp		Mempool where the private session is allocated
> >   *
> >   * @return
> >   *  - Returns 0 if private session structure have been created successfully.
> > @@ -251,9 +250,7 @@ typedef unsigned int (*cryptodev_asym_get_session_private_size_t)(
> >   *  - Returns -ENOMEM if the private session could not be allocated.
> >   */
> >  typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
> > -		struct rte_crypto_sym_xform *xform,
> > -		struct rte_cryptodev_sym_session *session,
> > -		struct rte_mempool *mp);
> > +		struct rte_crypto_sym_xform *xform, void *session);
> >  /**
> >   * Configure a Crypto asymmetric session on a device.
> >   *
> > @@ -279,7 +276,7 @@ typedef int
> > (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
> >   * @param	sess		Cryptodev session structure
> >   */
> >  typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
> > -		struct rte_cryptodev_sym_session *sess);
> > +		void *sess);
> >  /**
> >   * Free asymmetric session private data.
> >   *
> > diff --git a/lib/cryptodev/rte_crypto.h b/lib/cryptodev/rte_crypto.h
> > index a864f5036f..200617f623 100644
> > --- a/lib/cryptodev/rte_crypto.h
> > +++ b/lib/cryptodev/rte_crypto.h
> > @@ -420,7 +420,7 @@ rte_crypto_op_sym_xforms_alloc(struct
> > rte_crypto_op *op, uint8_t nb_xforms)
> >   */
> >  static inline int
> >  rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
> > -		struct rte_cryptodev_sym_session *sess)
> > +		void *sess)
> >  {
> >  	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
> >  		return -1;
> > diff --git a/lib/cryptodev/rte_crypto_sym.h
> > b/lib/cryptodev/rte_crypto_sym.h
> > index 58c0724743..848da1942c 100644
> > --- a/lib/cryptodev/rte_crypto_sym.h
> > +++ b/lib/cryptodev/rte_crypto_sym.h
> > @@ -932,7 +932,7 @@ __rte_crypto_sym_op_sym_xforms_alloc(struct
> > rte_crypto_sym_op *sym_op,
> >   */
> >  static inline int
> >  __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op
> > *sym_op,
> > -		struct rte_cryptodev_sym_session *sess)
> > +		void *sess)
> >  {
> >  	sym_op->session = sess;
> >
> > diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
> > index 9fa3aff1d3..a31cae202a 100644
> > --- a/lib/cryptodev/rte_cryptodev.c
> > +++ b/lib/cryptodev/rte_cryptodev.c
> > @@ -199,6 +199,8 @@ struct
> > rte_cryptodev_sym_session_pool_private_data {
> >  	/**< number of elements in sess_data array */
> >  	uint16_t user_data_sz;
> >  	/**< session user data will be placed after sess_data */
> > +	uint16_t sess_priv_sz;
> > +	/**< Maximum private session data size which each driver can use */
> >  };
> >
> >  int
> > @@ -1223,8 +1225,7 @@ rte_cryptodev_queue_pair_setup(uint8_t
> dev_id,
> > uint16_t queue_pair_id,
> >  		return -EINVAL;
> >  	}
> >
> > -	if ((qp_conf->mp_session && !qp_conf->mp_session_private) ||
> > -			(!qp_conf->mp_session && qp_conf-
> > >mp_session_private)) {
> > +	if (!qp_conf->mp_session) {
> >  		CDEV_LOG_ERR("Invalid mempools\n");
> >  		return -EINVAL;
> >  	}
> > @@ -1232,7 +1233,6 @@ rte_cryptodev_queue_pair_setup(uint8_t
> dev_id,
> > uint16_t queue_pair_id,
> >  	if (qp_conf->mp_session) {
> >  		struct rte_cryptodev_sym_session_pool_private_data
> > *pool_priv;
> >  		uint32_t obj_size = qp_conf->mp_session->elt_size;
> > -		uint32_t obj_priv_size = qp_conf->mp_session_private-
> > >elt_size;
> >  		struct rte_cryptodev_sym_session s = {0};
> >
> >  		pool_priv = rte_mempool_get_priv(qp_conf->mp_session);
> > @@ -1244,11 +1244,11 @@ rte_cryptodev_queue_pair_setup(uint8_t
> > dev_id, uint16_t queue_pair_id,
> >
> >  		s.nb_drivers = pool_priv->nb_drivers;
> >  		s.user_data_sz = pool_priv->user_data_sz;
> > +		s.priv_sz = pool_priv->sess_priv_sz;
> >
> > -		if
> > ((rte_cryptodev_sym_get_existing_header_session_size(&s) >
> > -			obj_size) || (s.nb_drivers <= dev->driver_id) ||
> > -
> > 	rte_cryptodev_sym_get_private_session_size(dev_id) >
> > -				obj_priv_size) {
> > +		if
> > (((rte_cryptodev_sym_get_existing_header_session_size(&s) +
> > +				(s.nb_drivers * s.priv_sz)) > obj_size) ||
> > +				(s.nb_drivers <= dev->driver_id)) {
> >  			CDEV_LOG_ERR("Invalid mempool\n");
> >  			return -EINVAL;
> >  		}
> > @@ -1710,11 +1710,11 @@ rte_cryptodev_pmd_callback_process(struct
> > rte_cryptodev *dev,
> >
> >  int
> >  rte_cryptodev_sym_session_init(uint8_t dev_id,
> > -		struct rte_cryptodev_sym_session *sess,
> > -		struct rte_crypto_sym_xform *xforms,
> > -		struct rte_mempool *mp)
> > +		void *sess_opaque,
> > +		struct rte_crypto_sym_xform *xforms)
> >  {
> >  	struct rte_cryptodev *dev;
> > +	struct rte_cryptodev_sym_session *sess = sess_opaque;
> >  	uint32_t sess_priv_sz =
> > rte_cryptodev_sym_get_private_session_size(
> >  			dev_id);
> >  	uint8_t index;
> > @@ -1727,10 +1727,10 @@ rte_cryptodev_sym_session_init(uint8_t
> dev_id,
> >
> >  	dev = rte_cryptodev_pmd_get_dev(dev_id);
> >
> > -	if (sess == NULL || xforms == NULL || dev == NULL || mp == NULL)
> > +	if (sess == NULL || xforms == NULL || dev == NULL)
> >  		return -EINVAL;
> >
> > -	if (mp->elt_size < sess_priv_sz)
> > +	if (sess->priv_sz < sess_priv_sz)
> >  		return -EINVAL;
> >
> >  	index = dev->driver_id;
> > @@ -1740,8 +1740,11 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> >  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> > >sym_session_configure, -ENOTSUP);
> >
> >  	if (sess->sess_data[index].refcnt == 0) {
> > +		sess->sess_data[index].data = (void *)((uint8_t *)sess +
> > +
> > 	rte_cryptodev_sym_get_header_session_size() +
> > +				(index * sess->priv_sz));
> >  		ret = dev->dev_ops->sym_session_configure(dev, xforms,
> > -							sess, mp);
> > +				sess->sess_data[index].data);
> >  		if (ret < 0) {
> >  			CDEV_LOG_ERR(
> >  				"dev_id %d failed to configure session
> > details",
> > @@ -1750,7 +1753,7 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
> >  		}
> >  	}
> >
> > -	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms, mp);
> > +	rte_cryptodev_trace_sym_session_init(dev_id, sess, xforms);
> >  	sess->sess_data[index].refcnt++;
> >  	return 0;
> >  }
> > @@ -1795,6 +1798,21 @@ rte_cryptodev_asym_session_init(uint8_t
> dev_id,
> >  	rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
> >  	return 0;
> >  }
> > +static size_t
> > +get_max_sym_sess_priv_sz(void)
> > +{
> > +	size_t max_sz, sz;
> > +	int16_t cdev_id, n;
> > +
> > +	max_sz = 0;
> > +	n =  rte_cryptodev_count();
> > +	for (cdev_id = 0; cdev_id != n; cdev_id++) {
> > +		sz = rte_cryptodev_sym_get_private_session_size(cdev_id);
> > +		if (sz > max_sz)
> > +			max_sz = sz;
> > +	}
> > +	return max_sz;
> > +}
> >
> >  struct rte_mempool *
> >  rte_cryptodev_sym_session_pool_create(const char *name, uint32_t
> > nb_elts,
> > @@ -1804,15 +1822,15 @@
> rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> >  	struct rte_mempool *mp;
> >  	struct rte_cryptodev_sym_session_pool_private_data *pool_priv;
> >  	uint32_t obj_sz;
> > +	uint32_t sess_priv_sz = get_max_sym_sess_priv_sz();
> >
> >  	obj_sz = rte_cryptodev_sym_get_header_session_size() +
> > user_data_size;
> > -	if (obj_sz > elt_size)
> > +	if (elt_size < obj_sz + (sess_priv_sz * nb_drivers)) {
> >  		CDEV_LOG_INFO("elt_size %u is expanded to %u\n",
> > elt_size,
> > -				obj_sz);
> > -	else
> > -		obj_sz = elt_size;
> > -
> > -	mp = rte_mempool_create(name, nb_elts, obj_sz, cache_size,
> > +				obj_sz + (sess_priv_sz * nb_drivers));
> > +		elt_size = obj_sz + (sess_priv_sz * nb_drivers);
> > +	}
> > +	mp = rte_mempool_create(name, nb_elts, elt_size, cache_size,
> >  			(uint32_t)(sizeof(*pool_priv)),
> >  			NULL, NULL, NULL, NULL,
> >  			socket_id, 0);
> > @@ -1832,6 +1850,7 @@ rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> >
> >  	pool_priv->nb_drivers = nb_drivers;
> >  	pool_priv->user_data_sz = user_data_size;
> > +	pool_priv->sess_priv_sz = sess_priv_sz;
> >
> >  	rte_cryptodev_trace_sym_session_pool_create(name, nb_elts,
> >  		elt_size, cache_size, user_data_size, mp);
> > @@ -1865,7 +1884,7 @@ rte_cryptodev_sym_is_valid_session_pool(struct
> > rte_mempool *mp)
> >  	return 1;
> >  }
> >
> > -struct rte_cryptodev_sym_session *
> > +void *
> >  rte_cryptodev_sym_session_create(struct rte_mempool *mp)
> >  {
> >  	struct rte_cryptodev_sym_session *sess;
> > @@ -1886,6 +1905,7 @@ rte_cryptodev_sym_session_create(struct
> > rte_mempool *mp)
> >
> >  	sess->nb_drivers = pool_priv->nb_drivers;
> >  	sess->user_data_sz = pool_priv->user_data_sz;
> > +	sess->priv_sz = pool_priv->sess_priv_sz;
> >  	sess->opaque_data = 0;
> >
> >  	/* Clear device session pointer.
> > @@ -1895,7 +1915,7 @@ rte_cryptodev_sym_session_create(struct
> > rte_mempool *mp)
> >  			rte_cryptodev_sym_session_data_size(sess));
> >
> >  	rte_cryptodev_trace_sym_session_create(mp, sess);
> > -	return sess;
> > +	return (void *)sess;
> >  }
> >
> >  struct rte_cryptodev_asym_session *
> > @@ -1933,9 +1953,9 @@ rte_cryptodev_asym_session_create(struct
> > rte_mempool *mp)
> >  }
> >
> >  int
> > -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> > -		struct rte_cryptodev_sym_session *sess)
> > +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *s)
> >  {
> > +	struct rte_cryptodev_sym_session *sess = s;
> >  	struct rte_cryptodev *dev;
> >  	uint8_t driver_id;
> >
> > @@ -1957,7 +1977,7 @@ rte_cryptodev_sym_session_clear(uint8_t
> dev_id,
> >
> >  	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->sym_session_clear,
> > -ENOTSUP);
> >
> > -	dev->dev_ops->sym_session_clear(dev, sess);
> > +	dev->dev_ops->sym_session_clear(dev, sess-
> > >sess_data[driver_id].data);
> >
> >  	rte_cryptodev_trace_sym_session_clear(dev_id, sess);
> >  	return 0;
> > @@ -1988,10 +2008,11 @@ rte_cryptodev_asym_session_clear(uint8_t
> > dev_id,
> >  }
> >
> >  int
> > -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
> > +rte_cryptodev_sym_session_free(void *s)
> >  {
> >  	uint8_t i;
> >  	struct rte_mempool *sess_mp;
> > +	struct rte_cryptodev_sym_session *sess = s;
> >
> >  	if (sess == NULL)
> >  		return -EINVAL;
> > diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> > index bb01f0f195..25af6fa7b9 100644
> > --- a/lib/cryptodev/rte_cryptodev.h
> > +++ b/lib/cryptodev/rte_cryptodev.h
> > @@ -537,8 +537,6 @@ struct rte_cryptodev_qp_conf {
> >  	uint32_t nb_descriptors; /**< Number of descriptors per queue pair
> > */
> >  	struct rte_mempool *mp_session;
> >  	/**< The mempool for creating session in sessionless mode */
> > -	struct rte_mempool *mp_session_private;
> > -	/**< The mempool for creating sess private data in sessionless mode
> > */
> >  };
> >
> >  /**
> > @@ -1126,6 +1124,8 @@ struct rte_cryptodev_sym_session {
> >  	/**< number of elements in sess_data array */
> >  	uint16_t user_data_sz;
> >  	/**< session user data will be placed after sess_data */
> > +	uint16_t priv_sz;
> > +	/**< Maximum private session data size which each driver can use */
> >  	__extension__ struct {
> >  		void *data;
> >  		uint16_t refcnt;
> > @@ -1177,10 +1177,10 @@
> rte_cryptodev_sym_session_pool_create(const
> > char *name, uint32_t nb_elts,
> >   * @param   mempool    Symmetric session mempool to allocate session
> >   *                     objects from
> >   * @return
> > - *  - On success return pointer to sym-session
> > + *  - On success return opaque pointer to sym-session
> >   *  - On failure returns NULL
> >   */
> > -struct rte_cryptodev_sym_session *
> > +void *
> >  rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
> >
> >  /**
> > @@ -1209,7 +1209,7 @@ rte_cryptodev_asym_session_create(struct
> > rte_mempool *mempool);
> >   *  - -EBUSY if not all device private data has been freed.
> >   */
> >  int
> > -rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session
> > *sess);
> > +rte_cryptodev_sym_session_free(void *sess);
> >
> >  /**
> >   * Frees asymmetric crypto session header, after checking that all
> > @@ -1229,25 +1229,23 @@ rte_cryptodev_asym_session_free(struct
> > rte_cryptodev_asym_session *sess);
> >
> >  /**
> >   * Fill out private data for the device id, based on its device type.
> > + * Memory for private data is already allocated in sess, the driver needs
> > + * to fill the content.
> >   *
> >   * @param   dev_id   ID of device that we want the session to be used on
> >   * @param   sess     Session where the private data will be attached to
> >   * @param   xforms   Symmetric crypto transform operations to apply on
> > flow
> >   *                   processed with this session
> > - * @param   mempool  Mempool where the private data is allocated.
> >   *
> >   * @return
> >   *  - On success, zero.
> >   *  - -EINVAL if input parameters are invalid.
> >   *  - -ENOTSUP if crypto device does not support the crypto transform or
> >   *    does not support symmetric operations.
> > - *  - -ENOMEM if the private session could not be allocated.
> >   */
> >  int
> > -rte_cryptodev_sym_session_init(uint8_t dev_id,
> > -			struct rte_cryptodev_sym_session *sess,
> > -			struct rte_crypto_sym_xform *xforms,
> > -			struct rte_mempool *mempool);
> > +rte_cryptodev_sym_session_init(uint8_t dev_id, void *sess,
> > +			struct rte_crypto_sym_xform *xforms);
> >
> >  /**
> >   * Initialize asymmetric session on a device with specific asymmetric xform
> > @@ -1286,8 +1284,7 @@ rte_cryptodev_asym_session_init(uint8_t dev_id,
> >   *  - -ENOTSUP if crypto device does not support symmetric operations.
> >   */
> >  int
> > -rte_cryptodev_sym_session_clear(uint8_t dev_id,
> > -			struct rte_cryptodev_sym_session *sess);
> > +rte_cryptodev_sym_session_clear(uint8_t dev_id, void *sess);
> >
> >  /**
> >   * Frees resources held by asymmetric session during
> > rte_cryptodev_session_init
> > diff --git a/lib/cryptodev/rte_cryptodev_trace.h
> > b/lib/cryptodev/rte_cryptodev_trace.h
> > index d1f4f069a3..44da04c425 100644
> > --- a/lib/cryptodev/rte_cryptodev_trace.h
> > +++ b/lib/cryptodev/rte_cryptodev_trace.h
> > @@ -56,7 +56,6 @@ RTE_TRACE_POINT(
> >  	rte_trace_point_emit_u16(queue_pair_id);
> >  	rte_trace_point_emit_u32(conf->nb_descriptors);
> >  	rte_trace_point_emit_ptr(conf->mp_session);
> > -	rte_trace_point_emit_ptr(conf->mp_session_private);
> >  )
> >
> >  RTE_TRACE_POINT(
> > @@ -106,15 +105,13 @@ RTE_TRACE_POINT(
> >  RTE_TRACE_POINT(
> >  	rte_cryptodev_trace_sym_session_init,
> >  	RTE_TRACE_POINT_ARGS(uint8_t dev_id,
> > -		struct rte_cryptodev_sym_session *sess, void *xforms,
> > -		void *mempool),
> > +		struct rte_cryptodev_sym_session *sess, void *xforms),
> >  	rte_trace_point_emit_u8(dev_id);
> >  	rte_trace_point_emit_ptr(sess);
> >  	rte_trace_point_emit_u64(sess->opaque_data);
> >  	rte_trace_point_emit_u16(sess->nb_drivers);
> >  	rte_trace_point_emit_u16(sess->user_data_sz);
> >  	rte_trace_point_emit_ptr(xforms);
> > -	rte_trace_point_emit_ptr(mempool);
> >  )
> >
> >  RTE_TRACE_POINT(
> > diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
> > index ad7904c0ee..efdba9c899 100644
> > --- a/lib/pipeline/rte_table_action.c
> > +++ b/lib/pipeline/rte_table_action.c
> > @@ -1719,7 +1719,7 @@ struct sym_crypto_data {
> >  	uint16_t op_mask;
> >
> >  	/** Session pointer. */
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >
> >  	/** Direction of crypto, encrypt or decrypt */
> >  	uint16_t direction;
> > @@ -1780,7 +1780,7 @@ sym_crypto_apply(struct sym_crypto_data
> *data,
> >  	const struct rte_crypto_auth_xform *auth_xform = NULL;
> >  	const struct rte_crypto_aead_xform *aead_xform = NULL;
> >  	struct rte_crypto_sym_xform *xform = p->xform;
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >  	int ret;
> >
> >  	memset(data, 0, sizeof(*data));
> > @@ -1905,7 +1905,7 @@ sym_crypto_apply(struct sym_crypto_data
> *data,
> >  		return -ENOMEM;
> >
> >  	ret = rte_cryptodev_sym_session_init(cfg->cryptodev_id, session,
> > -			p->xform, cfg->mp_init);
> > +			p->xform);
> >  	if (ret < 0) {
> >  		rte_cryptodev_sym_session_free(session);
> >  		return ret;
> > @@ -2858,7 +2858,7 @@ rte_table_action_time_read(struct
> > rte_table_action *action,
> >  	return 0;
> >  }
> >
> > -struct rte_cryptodev_sym_session *
> > +void *
> >  rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
> >  	void *data)
> >  {
> > diff --git a/lib/pipeline/rte_table_action.h b/lib/pipeline/rte_table_action.h
> > index 82bc9d9ac9..68db453a8b 100644
> > --- a/lib/pipeline/rte_table_action.h
> > +++ b/lib/pipeline/rte_table_action.h
> > @@ -1129,7 +1129,7 @@ rte_table_action_time_read(struct
> > rte_table_action *action,
> >   *   The pointer to the session on success, NULL otherwise.
> >   */
> >  __rte_experimental
> > -struct rte_cryptodev_sym_session *
> > +void *
> >  rte_table_action_crypto_sym_session_get(struct rte_table_action *action,
> >  	void *data);
> >
> > diff --git a/lib/vhost/rte_vhost_crypto.h b/lib/vhost/rte_vhost_crypto.h
> > index f54d731139..d9b7beed9c 100644
> > --- a/lib/vhost/rte_vhost_crypto.h
> > +++ b/lib/vhost/rte_vhost_crypto.h
> > @@ -50,8 +50,6 @@ rte_vhost_crypto_driver_start(const char *path);
> >   *  multiple Vhost-crypto devices.
> >   * @param sess_pool
> >   *  The pointer to the created cryptodev session pool.
> > - * @param sess_priv_pool
> > - *  The pointer to the created cryptodev session private data mempool.
> >   * @param socket_id
> >   *  NUMA Socket ID to allocate resources on. *
> >   * @return
> > @@ -61,7 +59,6 @@ rte_vhost_crypto_driver_start(const char *path);
> >  int
> >  rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
> >  		struct rte_mempool *sess_pool,
> > -		struct rte_mempool *sess_priv_pool,
> >  		int socket_id);
> >
> >  /**
> > diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
> > index 926b5c0bd9..b4464c4253 100644
> > --- a/lib/vhost/vhost_crypto.c
> > +++ b/lib/vhost/vhost_crypto.c
> > @@ -338,7 +338,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> > *vcrypto,
> >  		VhostUserCryptoSessionParam *sess_param)
> >  {
> >  	struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
> > -	struct rte_cryptodev_sym_session *session;
> > +	void *session;
> >  	int ret;
> >
> >  	switch (sess_param->op_type) {
> > @@ -383,8 +383,7 @@ vhost_crypto_create_sess(struct vhost_crypto
> > *vcrypto,
> >  		return;
> >  	}
> >
> > -	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1,
> > -			vcrypto->sess_priv_pool) < 0) {
> > +	if (rte_cryptodev_sym_session_init(vcrypto->cid, session, &xform1)
> > < 0) {
> >  		VC_LOG_ERR("Failed to initialize session");
> >  		sess_param->session_id = -VIRTIO_CRYPTO_ERR;
> >  		return;
> > @@ -1425,7 +1424,6 @@ rte_vhost_crypto_driver_start(const char *path)
> >  int
> >  rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
> >  		struct rte_mempool *sess_pool,
> > -		struct rte_mempool *sess_priv_pool,
> >  		int socket_id)
> >  {
> >  	struct virtio_net *dev = get_device(vid);
> > @@ -1447,7 +1445,6 @@ rte_vhost_crypto_create(int vid, uint8_t
> > cryptodev_id,
> >  	}
> >
> >  	vcrypto->sess_pool = sess_pool;
> > -	vcrypto->sess_priv_pool = sess_priv_pool;
> >  	vcrypto->cid = cryptodev_id;
> >  	vcrypto->cache_session_id = UINT64_MAX;
> >  	vcrypto->last_session_id = 1;
> > --
> > 2.25.1
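The mempool sizing rule in the hunks above — each element must now hold the session header, the user data area, and one private-data slot per driver — can be checked with a small DPDK-independent sketch. The sizes below are placeholders, not values returned by the real `rte_cryptodev_sym_get_header_session_size()` / `rte_cryptodev_sym_get_private_session_size()` calls:

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder sizes standing in for the values the real API would return. */
#define HDR_SZ       64u  /* session header incl. sess_data[] array */
#define USER_DATA_SZ 16u  /* per-session user data area */
#define NB_DRIVERS    2u
#define MAX_PRIV_SZ 128u  /* max private size over all crypto devices */

/* Minimum mempool element size after this change: header, user data,
 * then one private-data slot per driver. */
static size_t
min_elt_size(void)
{
	return HDR_SZ + USER_DATA_SZ + NB_DRIVERS * MAX_PRIV_SZ;
}

/* Offset of driver `index`'s private data inside one element, mirroring
 * the sess_data[index].data assignment in rte_cryptodev_sym_session_init(),
 * assuming user data sits directly after the header. */
static size_t
priv_offset(size_t index)
{
	return HDR_SZ + USER_DATA_SZ + index * MAX_PRIV_SZ;
}
```

With these numbers, an element of 336 bytes fits both drivers' private data, which is why `rte_cryptodev_sym_session_pool_create()` now expands `elt_size` when the caller's value is too small instead of shrinking to `obj_sz`.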


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI
  2021-09-28 19:47  4%   ` [dpdk-dev] [PATCH v2 0/2] " Dmitry Kozlyuk
  2021-09-28 19:47  4%     ` [dpdk-dev] [PATCH v2 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-05  0:55  4%     ` Dmitry Kozlyuk
  2021-10-05  0:55  4%       ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
                         ` (2 more replies)
  1 sibling, 3 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05  0:55 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Dmitry Kozlyuk

Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.

v3: add experimental tags and release notes for rdline.
v2: also hide struct rdline (David, Olivier).

Dmitry Kozlyuk (2):
  cmdline: make struct cmdline opaque
  cmdline: make struct rdline opaque

 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 19 ++++--
 doc/guides/rel_notes/deprecation.rst   |  4 --
 doc/guides/rel_notes/release_21_11.rst |  5 ++
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline.h                  | 19 ------
 lib/cmdline/cmdline_cirbuf.c           |  1 -
 lib/cmdline/cmdline_private.h          | 57 ++++++++++++++++-
 lib/cmdline/cmdline_rdline.c           | 50 ++++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 88 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 11 files changed, 163 insertions(+), 93 deletions(-)

-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque
  2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
@ 2021-10-05  0:55  4%       ` Dmitry Kozlyuk
  2021-10-05  0:55  3%       ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
  2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05  0:55 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Dmitry Kozlyuk, Olivier Matz, Ray Kinsella

Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html
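The mechanics of this kind of change can be sketched outside DPDK: the public header keeps only a forward declaration, the full definition moves to a private header, and applications go through functions. The names below are illustrative, not the actual cmdline API:

```c
#include <assert.h>
#include <stdlib.h>

/* --- public header: only a forward declaration is exposed, so the
 * struct layout can change later without breaking the ABI --- */
struct ctx;                     /* opaque to applications */
struct ctx *ctx_new(int value);
int ctx_value(const struct ctx *c);
void ctx_free(struct ctx *c);

/* --- private header / .c file: the real definition, free to grow --- */
struct ctx {
	int value;
	char scratch[32];       /* can be resized later, ABI unchanged */
};

struct ctx *
ctx_new(int value)
{
	struct ctx *c = calloc(1, sizeof(*c));

	if (c != NULL)
		c->value = value;
	return c;
}

int
ctx_value(const struct ctx *c)
{
	return c->value;
}

void
ctx_free(struct ctx *c)
{
	free(c);
}
```

Since applications never see `sizeof(struct ctx)`, allocation has to happen inside the library — which is exactly why hiding `struct cmdline` is only possible because `cmdline_new()`/`cmdline_free()` already exist.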

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   |  4 ----
 doc/guides/rel_notes/release_21_11.rst |  2 ++
 lib/cmdline/cmdline.h                  | 19 -------------------
 lib/cmdline/cmdline_private.h          |  8 +++++++-
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
-  content. On Linux and FreeBSD, supported prior to DPDK 20.11,
-  original structure will be kept until DPDK 21.11.
-
 * security: The functions ``rte_security_set_pkt_metadata`` and
   ``rte_security_get_userdata`` will be made inline functions and additional
   flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
 #ifndef _CMDLINE_H_
 #define _CMDLINE_H_
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
 #include <rte_common.h>
 #include <rte_compat.h>
 
@@ -27,23 +23,8 @@
 extern "C" {
 #endif
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
-	int s_in;
-	int s_out;
-	cmdline_parse_ctx_t *ctx;
-	struct rdline rdl;
-	char prompt[RDLINE_PROMPT_SIZE];
-	struct termios oldterm;
-};
-
-#else
-
 struct cmdline;
 
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
 struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
 void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
 void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
 #include <rte_os_shim.h>
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <rte_windows.h>
+#else
+#include <termios.h>
 #endif
 
 #include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
 	int is_console_input;
 	int is_console_output;
 };
+#endif
 
 struct cmdline {
 	int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
 	cmdline_parse_ctx_t *ctx;
 	struct rdline rdl;
 	char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
 	struct terminal oldterm;
 	char repeated_char;
 	WORD repeat_count;
-};
+#else
+	struct termios oldterm;
 #endif
+};
 
 /* Disable buffering and echoing, save previous settings to oldterm. */
 void terminal_adjust(struct cmdline *cl);
-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque
  2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2021-10-05  0:55  4%       ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-05  0:55  3%       ` Dmitry Kozlyuk
  2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05  0:55 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson,
	Olivier Matz, Ray Kinsella

Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:

* rdline_create(): allocate and initialize struct rdline.
  This function replaces rdline_init() and takes an extra parameter:
  opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.

Remove rdline_init() function from library headers and export list,
because using it requires the knowledge of sizeof(struct rdline).
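A DPDK-independent sketch of the resulting life cycle — create with opaque user data, reach that data from a callback through an accessor, free — using illustrative names rather than the actual rdline API:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct line;                    /* opaque, stands in for struct rdline */
typedef void (write_char_fn)(struct line *l, char c);

struct line {
	write_char_fn *write_char;
	void *opaque;           /* user data for callbacks */
	char history[64];       /* size now changeable transparently */
};

/* Allocate and initialize; replaces an init() that required callers
 * to know sizeof(struct line). */
int
line_create(struct line **out, write_char_fn *write_char, void *opaque)
{
	struct line *l;

	if (out == NULL || write_char == NULL)
		return -EINVAL;
	l = calloc(1, sizeof(*l));
	if (l == NULL)
		return -ENOMEM;
	l->write_char = write_char;
	l->opaque = opaque;
	*out = l;
	return 0;
}

void *
line_get_opaque(struct line *l)
{
	return l != NULL ? l->opaque : NULL;
}

size_t
line_get_history_buffer_size(struct line *l)
{
	return l != NULL ? sizeof(l->history) : 0;
}

void
line_free(struct line *l)
{
	free(l);
}

/* A callback reaches its user data through the accessor instead of a
 * public struct field. */
static void
count_write(struct line *l, char c)
{
	(void)c;
	++*(int *)line_get_opaque(l);
}
```

This mirrors how `cmdline_new()` now passes `cl` as the opaque pointer so `cmdline_write_char()` can recover it without `rdl->opaque` being public.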

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 19 ++++--
 doc/guides/rel_notes/release_21_11.rst |  3 +
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline_cirbuf.c           |  1 -
 lib/cmdline/cmdline_private.h          | 49 ++++++++++++++
 lib/cmdline/cmdline_rdline.c           | 50 ++++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 88 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 9 files changed, 154 insertions(+), 69 deletions(-)

diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
 	struct rdline *rdl = cmdline_get_rdline(cl);
 
 	cmdline_printf(cl, "History buffer size: %zu\n",
-			sizeof(rdl->history_buf));
+			rdline_get_history_buffer_size(rdl));
 }
 
 cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..367dcad5be 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,18 @@ test_cmdline_parse_fns(void)
 static int
 test_cmdline_rdline_fns(void)
 {
-	struct rdline rdl;
+	struct rdline *rdl = NULL;
 	rdline_write_char_t *wc = &cmdline_write_char;
 	rdline_validate_t *v = &valid_buffer;
 	rdline_complete_t *c = &complete_buffer;
 
-	if (rdline_init(NULL, wc, v, c) >= 0)
+	if (rdline_create(NULL, wc, v, c, NULL) >= 0)
 		goto error;
-	if (rdline_init(&rdl, NULL, v, c) >= 0)
+	if (rdline_create(&rdl, NULL, v, c, NULL) >= 0)
 		goto error;
-	if (rdline_init(&rdl, wc, NULL, c) >= 0)
+	if (rdline_create(&rdl, wc, NULL, c, NULL) >= 0)
 		goto error;
-	if (rdline_init(&rdl, wc, v, NULL) >= 0)
+	if (rdline_create(&rdl, wc, v, NULL, NULL) >= 0)
 		goto error;
 	if (rdline_char_in(NULL, 0) >= 0)
 		goto error;
@@ -102,25 +102,30 @@ test_cmdline_rdline_fns(void)
 		goto error;
 	if (rdline_add_history(NULL, "history") >= 0)
 		goto error;
-	if (rdline_add_history(&rdl, NULL) >= 0)
+	if (rdline_add_history(rdl, NULL) >= 0)
 		goto error;
 	if (rdline_get_history_item(NULL, 0) != NULL)
 		goto error;
 
 	/* void functions */
+	rdline_get_history_buffer_size(NULL);
+	rdline_get_opaque(NULL);
 	rdline_newline(NULL, "prompt");
-	rdline_newline(&rdl, NULL);
+	rdline_newline(rdl, NULL);
 	rdline_stop(NULL);
 	rdline_quit(NULL);
 	rdline_restart(NULL);
 	rdline_redisplay(NULL);
 	rdline_reset(NULL);
 	rdline_clear_history(NULL);
+	rdline_free(NULL);
 
+	rdline_free(rdl);
 	return 0;
 
 error:
 	printf("Error: function accepted null parameter!\n");
+	rdline_free(rdl);
 	return -1;
 }
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
 
 * cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
 
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+  to dynamically allocate and free it, and to access user data in callbacks.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 	cl->ctx = ctx;
 
 	ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
-			cmdline_complete_buffer);
+			cmdline_complete_buffer, cl);
 	if (ret != 0) {
 		free(cl);
 		return NULL;
 	}
 
-	cl->rdl.opaque = cl;
 	cmdline_set_prompt(cl, prompt);
 	rdline_newline(&cl->rdl, cl->prompt);
 
diff --git a/lib/cmdline/cmdline_cirbuf.c b/lib/cmdline/cmdline_cirbuf.c
index 829a8af563..cbb76a7016 100644
--- a/lib/cmdline/cmdline_cirbuf.c
+++ b/lib/cmdline/cmdline_cirbuf.c
@@ -10,7 +10,6 @@
 
 #include "cmdline_cirbuf.h"
 
-
 int
 cirbuf_init(struct cirbuf *cbuf, char *buf, unsigned int start, unsigned int maxlen)
 {
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
 
 #include <cmdline.h>
 
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE  32
+#define RDLINE_VT100_BUF_SIZE  8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+	RDLINE_INIT,
+	RDLINE_RUNNING,
+	RDLINE_EXITED
+};
+
+struct rdline {
+	enum rdline_status status;
+	/* rdline bufs */
+	struct cirbuf left;
+	struct cirbuf right;
+	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+	char right_buf[RDLINE_BUF_SIZE];
+
+	char prompt[RDLINE_PROMPT_SIZE];
+	unsigned int prompt_size;
+
+	char kill_buf[RDLINE_BUF_SIZE];
+	unsigned int kill_size;
+
+	/* history */
+	struct cirbuf history;
+	char history_buf[RDLINE_HISTORY_BUF_SIZE];
+	int history_cur_line;
+
+	/* callbacks and func pointers */
+	rdline_write_char_t *write_char;
+	rdline_validate_t *validate;
+	rdline_complete_t *complete;
+
+	/* vt100 parser */
+	struct cmdline_vt100 vt100;
+
+	/* opaque pointer */
+	void *opaque;
+};
+
 #ifdef RTE_EXEC_ENV_WINDOWS
 struct terminal {
 	DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
 __rte_format_printf(2, 0)
 int cmdline_vdprintf(int fd, const char *format, va_list op);
 
+int rdline_init(struct rdline *rdl,
+		rdline_write_char_t *write_char,
+		rdline_validate_t *validate,
+		rdline_complete_t *complete,
+		void *opaque);
+
 #endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..a525513465 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
 #include <ctype.h>
 
 #include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
 #include "cmdline_rdline.h"
 
 static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
 
 int
 rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete)
+	    rdline_write_char_t *write_char,
+	    rdline_validate_t *validate,
+	    rdline_complete_t *complete,
+	    void *opaque)
 {
 	if (!rdl || !write_char || !validate || !complete)
 		return -EINVAL;
@@ -47,10 +49,40 @@ rdline_init(struct rdline *rdl,
 	rdl->validate = validate;
 	rdl->complete = complete;
 	rdl->write_char = write_char;
+	rdl->opaque = opaque;
 	rdl->status = RDLINE_INIT;
 	return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
 }
 
+int
+rdline_create(struct rdline **out,
+	      rdline_write_char_t *write_char,
+	      rdline_validate_t *validate,
+	      rdline_complete_t *complete,
+	      void *opaque)
+{
+	struct rdline *rdl;
+	int ret;
+
+	if (out == NULL)
+		return -EINVAL;
+	rdl = malloc(sizeof(*rdl));
+	if (rdl == NULL)
+		return -ENOMEM;
+	ret = rdline_init(rdl, write_char, validate, complete, opaque);
+	if (ret < 0)
+		free(rdl);
+	else
+		*out = rdl;
+	return ret;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+	free(rdl);
+}
+
 void
 rdline_newline(struct rdline *rdl, const char *prompt)
 {
@@ -564,6 +596,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 	return NULL;
 }
 
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+	return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+	return rdl != NULL ? rdl->opaque : NULL;
+}
+
 int
 rdline_add_history(struct rdline * rdl, const char * buf)
 {
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..b93e9ea569 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
 /**
  * This file is a small equivalent to the GNU readline library, but it
  * was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
  *
  * Obviously, it does not support as many things as the GNU readline,
  * but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
  */
 
 #include <stdio.h>
+#include <rte_compat.h>
 #include <cmdline_cirbuf.h>
 #include <cmdline_vt100.h>
 
@@ -38,19 +37,6 @@
 extern "C" {
 #endif
 
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE  32
-#define RDLINE_VT100_BUF_SIZE  8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
-	RDLINE_INIT,
-	RDLINE_RUNNING,
-	RDLINE_EXITED
-};
-
 struct rdline;
 
 typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,34 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
 				char *dstbuf, unsigned int dstsize,
 				int *state);
 
-struct rdline {
-	enum rdline_status status;
-	/* rdline bufs */
-	struct cirbuf left;
-	struct cirbuf right;
-	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
-	char right_buf[RDLINE_BUF_SIZE];
-
-	char prompt[RDLINE_PROMPT_SIZE];
-	unsigned int prompt_size;
-
-	char kill_buf[RDLINE_BUF_SIZE];
-	unsigned int kill_size;
-
-	/* history */
-	struct cirbuf history;
-	char history_buf[RDLINE_HISTORY_BUF_SIZE];
-	int history_cur_line;
-
-	/* callbacks and func pointers */
-	rdline_write_char_t *write_char;
-	rdline_validate_t *validate;
-	rdline_complete_t *complete;
-
-	/* vt100 parser */
-	struct cmdline_vt100 vt100;
-
-	/* opaque pointer */
-	void *opaque;
-};
-
 /**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
+ * \param rdl Receives a pointer to the allocated structure.
  * \param write_char The function used by the function to write a character
  * \param validate A pointer to the function to execute when the
  *                 user validates the buffer.
  * \param complete A pointer to the function to execute when the
  *                 user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return 0 on success, negative errno-style code on failure.
  */
-int rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete);
+__rte_experimental
+int rdline_create(struct rdline **rdl,
+		  rdline_write_char_t *write_char,
+		  rdline_validate_t *validate,
+		  rdline_complete_t *complete,
+		  void *opaque);
 
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ *            If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
 
 /**
  * Init the current buffer, and display a prompt.
@@ -194,6 +162,18 @@ void rdline_clear_history(struct rdline *rdl);
  */
 char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
 
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..13feb4d0a5 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
 	rdline_clear_history;
 	rdline_get_buffer;
 	rdline_get_history_item;
-	rdline_init;
 	rdline_newline;
 	rdline_quit;
 	rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
 EXPERIMENTAL {
 	global:
 
+	# added in 20.11
 	cmdline_get_rdline;
 
+	# added in 21.11
+	rdline_create;
+	rdline_free;
+	rdline_get_history_buffer_size;
+	rdline_get_opaque;
+
 	local: *;
 };
-- 
2.29.3


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH] sort symbols map
@ 2021-10-05  9:16  4% David Marchand
  2021-10-05 14:16  0% ` Kinsella, Ray
  2021-10-11 11:36  0% ` Dumitrescu, Cristian
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2021-10-05  9:16 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, Ray Kinsella, Jasvinder Singh,
	Cristian Dumitrescu, Vladimir Medvedkin, Conor Walsh,
	Stephen Hemminger

Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)

Fixes: e73a7ab22422 ("net/softnic: promote manage API")
Fixes: 8f532a34c4f2 ("fib: promote API to stable")
Fixes: 4aeb92396b85 ("rib: promote API to stable")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.

I should have caught it when merging fib and rib patches...
But my eyes (or more likely brain) stopped at net/softnic bits.

What do you think?
Should I wait a bit more and send a global patch to catch any missed
sorting just before rc1?

In the meantime, if you merge .map updates, try to remember to run the
command above.

Thanks.
---
 drivers/net/softnic/version.map |  2 +-
 lib/fib/version.map             | 21 ++++++++++-----------
 lib/rib/version.map             | 33 ++++++++++++++++-----------------
 3 files changed, 27 insertions(+), 29 deletions(-)

diff --git a/drivers/net/softnic/version.map b/drivers/net/softnic/version.map
index cd5afcf155..01e1514276 100644
--- a/drivers/net/softnic/version.map
+++ b/drivers/net/softnic/version.map
@@ -1,8 +1,8 @@
 DPDK_22 {
 	global:
 
-	rte_pmd_softnic_run;
 	rte_pmd_softnic_manage;
+	rte_pmd_softnic_run;
 
 	local: *;
 };
diff --git a/lib/fib/version.map b/lib/fib/version.map
index af76add2b9..b23fa42b9b 100644
--- a/lib/fib/version.map
+++ b/lib/fib/version.map
@@ -1,25 +1,24 @@
 DPDK_22 {
 	global:
 
-	rte_fib_add;
-	rte_fib_create;
-	rte_fib_delete;
-	rte_fib_find_existing;
-	rte_fib_free;
-	rte_fib_lookup_bulk;
-	rte_fib_get_dp;
-	rte_fib_get_rib;
-	rte_fib_select_lookup;
-
 	rte_fib6_add;
 	rte_fib6_create;
 	rte_fib6_delete;
 	rte_fib6_find_existing;
 	rte_fib6_free;
-	rte_fib6_lookup_bulk;
 	rte_fib6_get_dp;
 	rte_fib6_get_rib;
+	rte_fib6_lookup_bulk;
 	rte_fib6_select_lookup;
+	rte_fib_add;
+	rte_fib_create;
+	rte_fib_delete;
+	rte_fib_find_existing;
+	rte_fib_free;
+	rte_fib_get_dp;
+	rte_fib_get_rib;
+	rte_fib_lookup_bulk;
+	rte_fib_select_lookup;
 
 	local: *;
 };
diff --git a/lib/rib/version.map b/lib/rib/version.map
index 6eb1252acb..f356fe8849 100644
--- a/lib/rib/version.map
+++ b/lib/rib/version.map
@@ -1,21 +1,6 @@
 DPDK_22 {
 	global:
 
-	rte_rib_create;
-	rte_rib_find_existing;
-	rte_rib_free;
-	rte_rib_get_depth;
-	rte_rib_get_ext;
-	rte_rib_get_ip;
-	rte_rib_get_nh;
-	rte_rib_get_nxt;
-	rte_rib_insert;
-	rte_rib_lookup;
-	rte_rib_lookup_parent;
-	rte_rib_lookup_exact;
-	rte_rib_set_nh;
-	rte_rib_remove;
-
 	rte_rib6_create;
 	rte_rib6_find_existing;
 	rte_rib6_free;
@@ -26,10 +11,24 @@ DPDK_22 {
 	rte_rib6_get_nxt;
 	rte_rib6_insert;
 	rte_rib6_lookup;
-	rte_rib6_lookup_parent;
 	rte_rib6_lookup_exact;
-	rte_rib6_set_nh;
+	rte_rib6_lookup_parent;
 	rte_rib6_remove;
+	rte_rib6_set_nh;
+	rte_rib_create;
+	rte_rib_find_existing;
+	rte_rib_free;
+	rte_rib_get_depth;
+	rte_rib_get_ext;
+	rte_rib_get_ip;
+	rte_rib_get_nh;
+	rte_rib_get_nxt;
+	rte_rib_insert;
+	rte_rib_lookup;
+	rte_rib_lookup_exact;
+	rte_rib_lookup_parent;
+	rte_rib_remove;
+	rte_rib_set_nh;
 
 	local: *;
 };
-- 
2.23.0


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-04 13:56  2%       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
@ 2021-10-05  9:54  0%         ` David Marchand
  2021-10-05 10:13  0%           ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-05  9:54 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay

On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Rework 'fast' burst functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c |  31 +++++
>  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
>  lib/ethdev/version.map      |   5 +
>  3 files changed, 210 insertions(+), 68 deletions(-)
>
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..27d29b2ac6 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>         fpo->txq.data = dev->data->tx_queues;
>         fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>  }
> +
> +uint16_t
> +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> +       struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +       void *opaque)
> +{
> +       const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +       while (cb != NULL) {
> +               nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> +                               nb_pkts, cb->param);
> +               cb = cb->next;
> +       }
> +
> +       return nb_rx;
> +}

This helper name is ambiguous.
Maybe the intent was to have a generic place holder for updates in
future releases.
But in this series, __rte_eth_rx_epilog is invoked only if a rx
callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.

I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks, and it
does not need to be advertised as internal.


> +
> +uint16_t
> +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> +       struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> +       const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +       while (cb != NULL) {
> +               nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> +                               cb->param);
> +               cb = cb->next;
> +       }
> +
> +       return nb_pkts;
> +}

Idem, rte_eth_call_tx_callbacks.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-04 13:56  8%       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-05 10:04  0%         ` David Marchand
  2021-10-05 10:43  0%           ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-05 10:04 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay

On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
> data into private header (ethdev_driver.h).
> Few minor changes to keep DPDK building after that.

This change is going to hurt a lot of people :-).
But this is a necessary move.

$ git grep-all -lw rte_eth_devices |grep -v \\.patch$
ANS/ans/ans_main.c
BESS/core/drivers/pmd.cc
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
FD.io-VPP/src/plugins/dpdk/device/format.c
lagopus/src/dataplane/dpdk/dpdk_io.c
OVS/lib/netdev-dpdk.c
packet-journey/app/kni.c
pktgen-dpdk/app/pktgen-port-cfg.c
pktgen-dpdk/app/pktgen-port-cfg.h
pktgen-dpdk/app/pktgen-stats.c
Trex/src/dpdk_funcs.c
Trex/src/drivers/trex_i40e_fdir.c
Trex/src/drivers/trex_ixgbe_fdir.c
TungstenFabric-vRouter/gdb/vr_dpdk.gdb


I did not check all projects for their uses of rte_eth_devices, but I
did the job for OVS.
If you have cycles to review...
https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/

One nit:

>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst        |   6 +
>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>  drivers/net/netvsc/hn_var.h                   |   1 +
>  lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>  lib/ethdev/version.map                        |   2 +-
>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>  lib/eventdev/rte_eventdev.c                   |   2 +-
>  lib/metrics/rte_metrics_telemetry.c           |   2 +-
>  13 files changed, 165 insertions(+), 152 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index 6055551443..2944149943 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -228,6 +228,12 @@ ABI Changes
>    to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
>    is used by  public inline function ``rte_eth_rx_queue_count``.
>
> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
> +  private data structures. ``rte_eth_devices[]`` can't be accessible directly

accessed*

> +  by user any more. While it is an ABI breakage, this change is intended
> +  to be transparent for both users (no changes in user app is required) and
> +  PMD developers (no changes in PMD is required).
> +
>
>  Known Issues
>  ------------



-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array
  2021-10-05  9:54  0%         ` David Marchand
@ 2021-10-05 10:13  0%           ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-05 10:13 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Li, Xiaoyun, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	Daley, John, Hyong Youb Kim, Zhang, Qi Z, Wang, Xiao W, humin (Q),
	Yisen Zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Yigit, Ferruh, Ray Kinsella,
	Jayatheerthan, Jay



> >
> > Rework 'fast' burst functions to use rte_eth_fp_ops[].
> > While it is an API/ABI breakage, this change is intended to be
> > transparent for both users (no changes in user app is required) and
> > PMD developers (no changes in PMD is required).
> > One extra thing to note - RX/TX callback invocation will cause extra
> > function call with these changes. That might cause some insignificant
> > slowdown for code-path where RX/TX callbacks are heavily involved.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c |  31 +++++
> >  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
> >  lib/ethdev/version.map      |   5 +
> >  3 files changed, 210 insertions(+), 68 deletions(-)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 3eeda6e9f9..27d29b2ac6 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> >         fpo->txq.data = dev->data->tx_queues;
> >         fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> >  }
> > +
> > +uint16_t
> > +__rte_eth_rx_epilog(uint16_t port_id, uint16_t queue_id,
> > +       struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> > +       void *opaque)
> > +{
> > +       const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +       while (cb != NULL) {
> > +               nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> > +                               nb_pkts, cb->param);
> > +               cb = cb->next;
> > +       }
> > +
> > +       return nb_rx;
> > +}
> 
> This helper name is ambiguous.
> Maybe the intent was to have a generic place holder for updates in
> future releases.

Yes, that was the intent.
We have an array of opaque pointers (one per queue).
So I thought some generic name would be better - who knows
how we would need to change this function and its parameters in future.
 
> But in this series, __rte_eth_rx_epilog is invoked only if a rx
> callback is registered, under #ifdef RTE_ETHDEV_RXTX_CALLBACKS.

Hmm, yes it implies that we'll do callback underneath :)
 
> I'd prefer we call it a spade, i.e. rte_eth_call_rx_callbacks,

If there are no objections from other people - I am ok to rename it.

> and it
> does not need to be advertised as internal.

About internal vs public, I think Ferruh proposed the same.
I am not really fond of it, as:
if we declare it public, we will be obliged to support it in future releases.
Plus it might encourage users to use it on their own, which I don't think is the right thing to do.

> 
> 
> > +
> > +uint16_t
> > +__rte_eth_tx_prolog(uint16_t port_id, uint16_t queue_id,
> > +       struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> > +{
> > +       const struct rte_eth_rxtx_callback *cb = opaque;
> > +
> > +       while (cb != NULL) {
> > +               nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> > +                               cb->param);
> > +               cb = cb->next;
> > +       }
> > +
> > +       return nb_pkts;
> > +}
> 
> Idem, rte_eth_call_tx_callbacks.
> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures
  2021-10-05 10:04  0%         ` David Marchand
@ 2021-10-05 10:43  0%           ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-10-05 10:43 UTC (permalink / raw)
  To: David Marchand, Konstantin Ananyev, Thomas Monjalon
  Cc: dev, Xiaoyun Li, Anoob Joseph, Jerin Jacob Kollanukkaran,
	Nithin Dabilpuram, Ankur Dwivedi, Shepard Siegel, Ed Czeck,
	John Miller, Igor Russkikh, Ajit Khaparde, Somnath Kotur,
	Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, humin (Q),
	Yisen Zhuang, oulijun, Beilei Xing, Jingjing Wu, Qiming Yang,
	Matan Azrad, Slava Ovsiienko, Stephen Hemminger, Long Li,
	heinrich.kuhn, Kiran Kumar Kokkilagadda, Andrew Rybchenko,
	Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
	Chenbo, Thomas Monjalon, Ray Kinsella, Jayatheerthan, Jay

On 10/5/2021 11:04 AM, David Marchand wrote:
> On Mon, Oct 4, 2021 at 3:59 PM Konstantin Ananyev
> <konstantin.ananyev@intel.com> wrote:
>>
>> Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>> data into private header (ethdev_driver.h).
>> Few minor changes to keep DPDK building after that.
> 
> This change is going to hurt a lot of people :-).
> But this is a necessary move.
> 

+1 that it is a necessary move, but I am surprised to see how much 'rte_eth_devices'
is accessed directly.

Do you have any idea/suggestion on how we can reduce the pain for them?

> $ git grep-all -lw rte_eth_devices |grep -v \\.patch$
> ANS/ans/ans_main.c
> BESS/core/drivers/pmd.cc
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/qdma_xdebug.c
> dma_ip_drivers/QDMA/DPDK/drivers/net/qdma/rte_pmd_qdma.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/pcierw.c
> dma_ip_drivers/QDMA/DPDK/examples/qdma_testapp/testapp.c
> FD.io-VPP/src/plugins/dpdk/device/format.c
> lagopus/src/dataplane/dpdk/dpdk_io.c
> OVS/lib/netdev-dpdk.c
> packet-journey/app/kni.c
> pktgen-dpdk/app/pktgen-port-cfg.c
> pktgen-dpdk/app/pktgen-port-cfg.h
> pktgen-dpdk/app/pktgen-stats.c
> Trex/src/dpdk_funcs.c
> Trex/src/drivers/trex_i40e_fdir.c
> Trex/src/drivers/trex_ixgbe_fdir.c
> TungstenFabric-vRouter/gdb/vr_dpdk.gdb
> 
> 
> I did not check all projects for their uses of rte_eth_devices, but I
> did the job for OVS.
> If you have cycles to review...
> https://patchwork.ozlabs.org/project/openvswitch/patch/20210907082343.16370-1-david.marchand@redhat.com/
> 
> One nit:
> 
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>   doc/guides/rel_notes/release_21_11.rst        |   6 +
>>   drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>>   drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>>   drivers/net/cxgbe/base/adapter.h              |   2 +-
>>   drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>>   drivers/net/netvsc/hn_var.h                   |   1 +
>>   lib/ethdev/ethdev_driver.h                    | 149 ++++++++++++++++++
>>   lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
>>   lib/ethdev/version.map                        |   2 +-
>>   lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>>   lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>>   lib/eventdev/rte_eventdev.c                   |   2 +-
>>   lib/metrics/rte_metrics_telemetry.c           |   2 +-
>>   13 files changed, 165 insertions(+), 152 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
>> index 6055551443..2944149943 100644
>> --- a/doc/guides/rel_notes/release_21_11.rst
>> +++ b/doc/guides/rel_notes/release_21_11.rst
>> @@ -228,6 +228,12 @@ ABI Changes
>>     to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
>>     is used by  public inline function ``rte_eth_rx_queue_count``.
>>
>> +* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
>> +  private data structures. ``rte_eth_devices[]`` can't be accessible directly
> 
> accessed*
> 
>> +  by user any more. While it is an ABI breakage, this change is intended
>> +  to be transparent for both users (no changes in user app is required) and
>> +  PMD developers (no changes in PMD is required).
>> +
>>
>>   Known Issues
>>   ------------
> 
> 
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal
    @ 2021-10-05 12:14  4% ` Harman Kalra
  2021-10-05 12:14  1%   ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
  2021-10-05 16:07  0% ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
  2 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra

Moving struct rte_intr_handle to be an internal structure to
avoid any ABI breakage in future. This structure defines
some static arrays, and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, one must either change
the RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=


This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new file, eal_common_interrupts.c, is introduced, where all these APIs
are defined and the struct rte_intr_handle definition is hidden.

Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides prototypes and implementations of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it can't be static, as the
size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll-related definitions and prototypes are moved into a
new header, rte_epoll.h, and APIs defined in rte_eal_interrupts.h
which were driver-specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.

Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as required and to avoid accessing the fields
directly.

Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.

Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle, and free it on cleanup.

Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed, and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirements using the new API rte_intr_handle_event_list_update().
E.g. on PCI device probe, the MSI-X size can be queried and these arrays can
be reallocated accordingly.

Patch 6: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the alarm
interrupt instance can be freed in alarm fini.

Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd power functionality with octeontx2 and i40e Intel cards,
   where interrupts are expected on packet arrival.

v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif

v2:
* Merged the prototype and implementation patches into patch 1.
* Restricted allocation to a single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.

Harman Kalra (6):
  eal/interrupts: implement get set APIs
  eal/interrupts: avoid direct access to interrupt handle
  test/interrupt: apply get set interrupt handle APIs
  drivers: remove direct access to interrupt handle
  eal/interrupts: make interrupt handle structure opaque
  eal/alarm: introduce alarm fini routine

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 163 +++--
 drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  21 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  21 +-
 drivers/bus/auxiliary/auxiliary_common.c      |   2 +
 drivers/bus/auxiliary/linux/auxiliary.c       |   9 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  15 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  20 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  15 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/bsd/pci.c                     |  21 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  73 +-
 drivers/bus/pci/linux/pci_vfio.c              | 115 +++-
 drivers/bus/pci/pci_common.c                  |  29 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   5 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 106 +--
 drivers/common/cnxk/roc_nix_irq.c             |  36 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  49 +-
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 ++--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  20 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  33 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  23 +-
 drivers/net/e1000/igb_ethdev.c                |  79 +--
 drivers/net/ena/ena_ethdev.c                  |  35 +-
 drivers/net/enic/enic_main.c                  |  26 +-
 drivers/net/failsafe/failsafe.c               |  23 +-
 drivers/net/failsafe/failsafe_intr.c          |  43 +-
 drivers/net/failsafe/failsafe_ops.c           |  21 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  57 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  64 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  53 +-
 drivers/net/iavf/iavf_ethdev.c                |  42 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  21 +-
 drivers/net/ice/ice_ethdev.c                  |  49 +-
 drivers/net/igc/igc_ethdev.c                  |  45 +-
 drivers/net/ionic/ionic_ethdev.c              |  17 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  66 +-
 drivers/net/memif/memif_socket.c              | 111 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  61 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  19 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  47 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  54 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  24 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  26 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_common.c                  |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  |  13 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |  13 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  29 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  30 +-
 drivers/net/tap/rte_eth_tap.c                 |  36 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  32 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  12 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  34 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  33 +-
 drivers/net/vhost/rte_eth_vhost.c             |  75 +-
 drivers/net/virtio/virtio_ethdev.c            |  21 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  48 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  43 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  62 +-
 drivers/raw/ntb/ntb.c                         |   9 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |  10 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  45 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 649 ++++++++++++++++++
 lib/eal/common/eal_private.h                  |  11 +
 lib/eal/common/meson.build                    |   1 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  52 +-
 lib/eal/freebsd/eal_interrupts.c              | 110 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 --------
 lib/eal/include/rte_eal_trace.h               |  24 +-
 lib/eal/include/rte_epoll.h                   | 118 ++++
 lib/eal/include/rte_interrupts.h              | 634 ++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  37 +-
 lib/eal/linux/eal_dev.c                       |  63 +-
 lib/eal/linux/eal_interrupts.c                | 302 +++++---
 lib/eal/version.map                           |  46 +-
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 131 files changed, 3645 insertions(+), 1706 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.18.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
  2021-10-05 12:14  4% ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
@ 2021-10-05 12:14  1%   ` Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
  To: dev, Harman Kalra, Bruce Richardson; +Cc: david.marchand, dmitry.kozliuk, mdr

Change the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided, to prevent ABI breakage in the future.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 lib/eal/freebsd/eal_interrupts.c | 111 ++++++++----
 lib/eal/include/rte_interrupts.h |   2 +
 lib/eal/linux/eal_interrupts.c   | 302 +++++++++++++++++++------------
 3 files changed, 268 insertions(+), 147 deletions(-)

diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..cf6216601b 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -60,7 +60,7 @@ static int
 intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 {
 	/* alarm callbacks are special case */
-	if (ih->type == RTE_INTR_HANDLE_ALARM) {
+	if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
 		uint64_t timeout_ns;
 
 		/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 	} else {
 		ke->filter = EVFILT_READ;
 	}
-	ke->ident = ih->fd;
+	ke->ident = rte_intr_fd_get(ih);
 
 	return 0;
 }
@@ -86,10 +86,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 {
 	struct rte_intr_callback *callback;
 	struct rte_intr_source *src;
-	int ret = 0, add_event = 0;
+	int ret = 0, add_event = 0, mem_allocator;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* find the source for this intr_handle */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+		    rte_intr_fd_get(intr_handle))
 			break;
 	}
 
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	 * thing on the list should be eal_alarm_callback() and we may
 	 * be called just to reset the timer.
 	 */
-	if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
-		 !TAILQ_EMPTY(&src->callbacks)) {
+	if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+		RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
 		callback = NULL;
 	} else {
 		/* allocate a new interrupt callback entity */
@@ -135,9 +137,35 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 				ret = -ENOMEM;
 				goto fail;
 			} else {
-				src->intr_handle = *intr_handle;
-				TAILQ_INIT(&src->callbacks);
-				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				/* src->interrupt instance memory allocated
+				 * depends on from where intr_handle memory
+				 * is allocated.
+				 */
+				mem_allocator =
+					rte_intr_instance_mem_allocator_get(
+								intr_handle);
+				if (mem_allocator == 0)
+					src->intr_handle =
+						rte_intr_instance_alloc(
+						RTE_INTR_ALLOC_TRAD_HEAP);
+				else if (mem_allocator == 1)
+					src->intr_handle =
+						rte_intr_instance_alloc(
+						RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+				else
+					RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+				if (src->intr_handle == NULL) {
+					RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+					free(callback);
+					ret = -ENOMEM;
+				} else {
+					rte_intr_instance_copy(src->intr_handle,
+							       intr_handle);
+					TAILQ_INIT(&src->callbacks);
+					TAILQ_INSERT_TAIL(&intr_sources, src,
+							  next);
+				}
 			}
 		}
 
@@ -151,7 +179,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	/* add events to the queue. timer events are special as we need to
 	 * re-set the timer.
 	 */
-	if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+	if (add_event || rte_intr_type_get(src->intr_handle) ==
+							RTE_INTR_HANDLE_ALARM) {
 		struct kevent ke;
 
 		memset(&ke, 0, sizeof(ke));
@@ -173,12 +202,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			 */
 			if (errno == ENODEV)
 				RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
-					src->intr_handle.fd);
+				rte_intr_fd_get(src->intr_handle));
 			else
 				RTE_LOG(ERR, EAL, "Error adding fd %d "
-						"kevent, %s\n",
-						src->intr_handle.fd,
-						strerror(errno));
+					"kevent, %s\n",
+					rte_intr_fd_get(
+							src->intr_handle),
+					strerror(errno));
 			ret = -errno;
 			goto fail;
 		}
@@ -213,7 +243,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -228,7 +258,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -268,7 +299,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -282,7 +313,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -314,7 +346,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		 */
 		if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 			RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
-				src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			/* removing non-existent even is an expected condition
 			 * in some circumstances (e.g. oneshot events).
 			 */
@@ -365,17 +398,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -388,7 +422,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -406,17 +440,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_fd_get(intr_handle) < 0 ||
+				rte_intr_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -429,7 +464,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -441,7 +476,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (intr_handle &&
+	    rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 0;
 
 	return -1;
@@ -463,7 +499,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd == event_fd)
+			if (rte_intr_fd_get(src->intr_handle) ==
+								event_fd)
 				break;
 		if (src == NULL) {
 			rte_spinlock_unlock(&intr_lock);
@@ -475,7 +512,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_ALARM:
 			bytes_read = 0;
 			call = true;
@@ -546,7 +583,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				/* mark for deletion from the queue */
 				ke.flags = EV_DELETE;
 
-				if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+				if (intr_source_to_kevent(src->intr_handle,
+							  &ke) < 0) {
 					RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
 					rte_spinlock_unlock(&intr_lock);
 					return;
@@ -557,7 +595,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				 */
 				if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 					RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
-						"%s\n", src->intr_handle.fd,
+						"%s\n",
+						rte_intr_fd_get(
+							src->intr_handle),
 						strerror(errno));
 					/* removing non-existent even is an expected
 					 * condition in some circumstances
@@ -567,7 +607,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 			}
 		}
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index db830907fb..442b02de8f 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -28,6 +28,8 @@ struct rte_intr_handle;
 /** Interrupt instance allocation flags
  * @see rte_intr_instance_alloc
  */
+/** Allocate interrupt instance from traditional heap */
+#define RTE_INTR_ALLOC_TRAD_HEAP	0x00000000
 /** Allocate interrupt instance using DPDK memory management APIs */
 #define RTE_INTR_ALLOC_DPDK_ALLOCATOR	0x00000001
 
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a9d6833b79 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
 #include <stdbool.h>
 
 #include <rte_common.h>
+#include <rte_epoll.h>
 #include <rte_interrupts.h>
 #include <rte_memory.h>
 #include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -112,7 +113,7 @@ static int
 vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 	int *fd_ptr;
 
 	len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -159,7 +161,7 @@ static int
 vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL,
-			"Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling INTx interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -202,6 +206,7 @@ static int
 vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set irq_set;
+	int vfio_dev_fd;
 
 	/* unmask INTx */
 	memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 	irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set.start = 0;
 
-	if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -253,7 +260,7 @@ static int
 vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd, i;
 
 	len = sizeof(irq_set_buf);
 
 	irq_set = (struct vfio_irq_set *) irq_set_buf;
 	irq_set->argsz = len;
 	/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
-	irq_set->count = intr_handle->max_intr ?
-		(intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
-		RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+	irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+		(rte_intr_max_intr_get(intr_handle) >
+		 RTE_MAX_RXTX_INTR_VEC_ID + 1 ?	RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+		 rte_intr_max_intr_get(intr_handle)) : 1;
+
 	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
 	/* INTR vector offset 0 reserve for non-efds mapping */
-	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
-	memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
-		sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+	for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+		fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+			rte_intr_efds_index_get(intr_handle, i);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -314,7 +327,7 @@ static int
 vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI-X interrupts for fd %d\n",
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 
 	return ret;
 }
@@ -399,20 +416,22 @@ static int
 uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* disable interrupts */
 	command_high |= 0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -423,20 +442,22 @@ static int
 uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* enable interrupts */
 	command_high &= ~0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 0;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 1;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -475,14 +498,15 @@ int
 rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			rte_intr_callback_fn cb, void *cb_arg)
 {
-	int ret, wake_thread;
+	int ret, wake_thread, mem_allocator;
 	struct rte_intr_source *src;
 	struct rte_intr_callback *callback;
 
 	wake_thread = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* check if there is at least one callback registered for the fd */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd) {
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle)) {
 			/* we had no interrupts for this */
 			if (TAILQ_EMPTY(&src->callbacks))
 				wake_thread = 1;
@@ -522,12 +547,34 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			free(callback);
 			ret = -ENOMEM;
 		} else {
-			src->intr_handle = *intr_handle;
-			TAILQ_INIT(&src->callbacks);
-			TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
-			TAILQ_INSERT_TAIL(&intr_sources, src, next);
-			wake_thread = 1;
-			ret = 0;
+			/* src->interrupt instance memory allocated depends on
+			 * from where intr_handle memory is allocated.
+			 */
+			mem_allocator =
+			rte_intr_instance_mem_allocator_get(intr_handle);
+			if (mem_allocator == 0)
+				src->intr_handle = rte_intr_instance_alloc(
+						RTE_INTR_ALLOC_TRAD_HEAP);
+			else if (mem_allocator == 1)
+				src->intr_handle = rte_intr_instance_alloc(
+						RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+			else
+				RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+			if (src->intr_handle == NULL) {
+				RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+				free(callback);
+				ret = -ENOMEM;
+			} else {
+				rte_intr_instance_copy(src->intr_handle,
+						       intr_handle);
+				TAILQ_INIT(&src->callbacks);
+				TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+						  next);
+				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				wake_thread = 1;
+				ret = 0;
+			}
 		}
 	}
 
@@ -555,7 +602,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -565,7 +612,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -605,7 +653,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -615,7 +663,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_fd_get(src->intr_handle) ==
+					rte_intr_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -646,6 +695,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 	}
@@ -677,22 +727,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to enable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -734,7 +785,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -757,13 +808,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	int uio_cfg_fd;
+
+	if (intr_handle && rte_intr_type_get(intr_handle) ==
+							RTE_INTR_HANDLE_VDEV)
 		return 0;
 
-	if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
 		return -1;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	/* Both acking and enabling are same for UIO */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -796,7 +851,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	/* unknown handle type */
 	default:
 		RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
-			intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -806,22 +861,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+	if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_type_get(intr_handle)) {
 	/* write to the uio fd to disable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_disable(intr_handle))
@@ -863,7 +919,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+			rte_intr_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -896,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		}
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd ==
+			if (rte_intr_fd_get(src->intr_handle) ==
 					events[n].data.fd)
 				break;
 		if (src == NULL){
@@ -909,7 +965,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_UIO:
 		case RTE_INTR_HANDLE_UIO_INTX:
 			bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1029,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 					TAILQ_REMOVE(&src->callbacks, cb, next);
 					free(cb);
 				}
+				rte_intr_instance_free(src->intr_handle);
 				free(src);
 				return -1;
 			} else if (bytes_read == 0)
@@ -1012,7 +1069,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 			if (cb->pending_delete) {
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 				rv++;
 			}
@@ -1021,6 +1079,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_instance_free(src->intr_handle);
 			free(src);
 		}
 
@@ -1123,16 +1182,18 @@ eal_intr_thread_main(__rte_unused void *arg)
 				continue; /* skip those with no callbacks */
 			memset(&ev, 0, sizeof(ev));
 			ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
-			ev.data.fd = src->intr_handle.fd;
+			ev.data.fd = rte_intr_fd_get(src->intr_handle);
 
 			/**
 			 * add all the uio device file descriptor
 			 * into wait list.
 			 */
 			if (epoll_ctl(pfd, EPOLL_CTL_ADD,
-					src->intr_handle.fd, &ev) < 0){
+				rte_intr_fd_get(src->intr_handle),
+								&ev) < 0) {
 				rte_panic("Error adding fd %d epoll_ctl, %s\n",
-					src->intr_handle.fd, strerror(errno));
+				rte_intr_fd_get(src->intr_handle),
+				strerror(errno));
 			}
 			else
 				numfds++;
@@ -1185,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 	int bytes_read = 0;
 	int nbytes;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_type_get(intr_handle)) {
 	case RTE_INTR_HANDLE_UIO:
 	case RTE_INTR_HANDLE_UIO_INTX:
 		bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1259,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 		break;
 #endif
 	case RTE_INTR_HANDLE_VDEV:
-		bytes_read = intr_handle->efd_counter_size;
+		bytes_read = rte_intr_efd_counter_size_get(intr_handle);
 		/* For vdev, number of bytes to read is set by driver */
 		break;
 	case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1480,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
 		(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
 
-	if (!intr_handle || intr_handle->nb_efd == 0 ||
-	    efd_idx >= intr_handle->nb_efd) {
+	if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+	    efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
 		RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
 		return -EPERM;
 	}
@@ -1428,7 +1489,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	switch (op) {
 	case RTE_INTR_EVENT_ADD:
 		epfd_op = EPOLL_CTL_ADD;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1503,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
 		epdata->cb_arg = (void *)intr_handle;
 		rc = rte_epoll_ctl(epfd, epfd_op,
-				   intr_handle->efds[efd_idx], rev);
+				   rte_intr_efds_index_get(intr_handle,
+								  efd_idx),
+				   rev);
 		if (!rc)
 			RTE_LOG(DEBUG, EAL,
 				"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1515,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		break;
 	case RTE_INTR_EVENT_DEL:
 		epfd_op = EPOLL_CTL_DEL;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1540,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 	struct rte_epoll_event *rev;
 
-	for (i = 0; i < intr_handle->nb_efd; i++) {
-		rev = &intr_handle->elist[i];
+	for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+									i++) {
+		rev = rte_intr_elist_index_get(intr_handle, i);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
 			continue;
@@ -1498,7 +1562,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 
 	assert(nb_efd != 0);
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
 		for (i = 0; i < n; i++) {
 			fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 			if (fd < 0) {
@@ -1507,21 +1571,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 					errno, strerror(errno));
 				return -errno;
 			}
-			intr_handle->efds[i] = fd;
+
+			if (rte_intr_efds_index_set(intr_handle, i, fd))
+				return -rte_errno;
 		}
-		intr_handle->nb_efd   = n;
-		intr_handle->max_intr = NB_OTHER_INTR + n;
-	} else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+		if (rte_intr_nb_efd_set(intr_handle, n))
+			return -rte_errno;
+
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+			return -rte_errno;
+	} else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		/* only check, initialization would be done in vdev driver.*/
-		if (intr_handle->efd_counter_size >
+		if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
 		    sizeof(union rte_intr_read_buffer)) {
 			RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
 			return -EINVAL;
 		}
 	} else {
-		intr_handle->efds[0]  = intr_handle->fd;
-		intr_handle->nb_efd   = RTE_MIN(nb_efd, 1U);
-		intr_handle->max_intr = NB_OTHER_INTR;
+		if (rte_intr_efds_index_set(intr_handle, 0,
+					    rte_intr_fd_get(intr_handle)))
+			return -rte_errno;
+		if (rte_intr_nb_efd_set(intr_handle,
+					RTE_MIN(nb_efd, 1U)))
+			return -rte_errno;
+		if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+			return -rte_errno;
 	}
 
 	return 0;
@@ -1533,18 +1608,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 
 	rte_intr_free_epoll_fd(intr_handle);
-	if (intr_handle->max_intr > intr_handle->nb_efd) {
-		for (i = 0; i < intr_handle->nb_efd; i++)
-			close(intr_handle->efds[i]);
+	if (rte_intr_max_intr_get(intr_handle) >
+				rte_intr_nb_efd_get(intr_handle)) {
+		for (i = 0; i <
+			(uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+			close(rte_intr_efds_index_get(intr_handle, i));
 	}
-	intr_handle->nb_efd = 0;
-	intr_handle->max_intr = 0;
+	rte_intr_nb_efd_set(intr_handle, 0);
+	rte_intr_max_intr_set(intr_handle, 0);
 }
 
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
-	return !(!intr_handle->nb_efd);
+	return !(!rte_intr_nb_efd_get(intr_handle));
 }
 
 int
@@ -1553,16 +1630,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	if (!rte_intr_dp_is_en(intr_handle))
 		return 1;
 	else
-		return !!(intr_handle->max_intr - intr_handle->nb_efd);
+		return !!(rte_intr_max_intr_get(intr_handle) -
+				rte_intr_nb_efd_get(intr_handle));
 }
 
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
 		return 1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 1;
 
 	return 0;
-- 
2.18.0



* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-04 13:55  2%       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
@ 2021-10-05 13:09  0%         ` Thomas Monjalon
  2021-10-05 16:41  0%           ` Ananyev, Konstantin
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-10-05 13:09 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, ferruh.yigit, mdr, jay.jayatheerthan

04/10/2021 15:55, Konstantin Ananyev:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for 'fast' calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.

I don't understand why 'fast' is quoted.
It looks strange.


> +/* reset eth 'fast' API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth 'fast' API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);

I assume "fp" stands for fast path.
Please write "fast path" completely in the comments.

> +	/* expose selection of PMD rx/tx function */
> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
[...]
> +	/* point rx/tx functions to dummy ones */
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);

Nit: Rx/Tx
or could be "fast path", to be consistent.

> +	/*
> +	 * for secondary process, at that point we expect device
> +	 * to be already 'usable', so shared data and all function pointers
> +	 * for 'fast' devops have to be setup properly inside rte_eth_dev.
> +	 */
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>  
>  	dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 948c0b71c1..fe47a660c7 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointernals to internal ethdev RX/TXi

typos in above line

> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for 'fast' ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * 'fast' ethdev funcions and related data are hold in a flat array.
> + * one entry per ethdev.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/** first 64B line */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_tx_burst_t tx_pkt_burst;
> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	uintptr_t reserved[2];

uintptr_t size is not fixed.
I think you mean uint64_t.

> +
> +	/** second 64B line */
> +	struct rte_ethdev_qdata rxq;
> +	struct rte_ethdev_qdata txq;
> +	uintptr_t reserved2[4];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];





* Re: [dpdk-dev] [PATCH] sort symbols map
  2021-10-05  9:16  4% [dpdk-dev] [PATCH] sort symbols map David Marchand
@ 2021-10-05 14:16  0% ` Kinsella, Ray
  2021-10-05 14:31  0%   ` David Marchand
  2021-10-05 15:06  0%   ` David Marchand
  2021-10-11 11:36  0% ` Dumitrescu, Cristian
  1 sibling, 2 replies; 200+ results
From: Kinsella, Ray @ 2021-10-05 14:16 UTC (permalink / raw)
  To: David Marchand, dev
  Cc: thomas, ferruh.yigit, Jasvinder Singh, Cristian Dumitrescu,
	Vladimir Medvedkin, Conor Walsh, Stephen Hemminger



On 05/10/2021 10:16, David Marchand wrote:
> Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> 
> Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> Fixes: 4aeb92396b85 ("rib: promote API to stable")
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
> 
> I should have caught it when merging fib and rib patches...
> But my eyes (or more likely brain) stopped at net/softnic bits.
> 
> What do you think?
> Should I wait a bit more and send a global patch to catch any missed
> sorting just before rc1?
> 
> In the meantime, if you merge .map updates, try to remember to run the
> command above.
> 
> Thanks.
> ---
>  drivers/net/softnic/version.map |  2 +-
>  lib/fib/version.map             | 21 ++++++++++-----------
>  lib/rib/version.map             | 33 ++++++++++++++++-----------------
>  3 files changed, 27 insertions(+), 29 deletions(-)
> 

Something to add to the Symbol Bot also, maybe?

Acked-by: Ray Kinsella <mdr@ashroe.eu>

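[Editor's note] The fix itself is mechanical. As a standalone illustration — not the actual update-abi.sh, which regenerates the whole map file — a sorted-order check over a list of map symbols can be written like this (the symbol names are made up for the example):

```shell
# Hypothetical pre-commit check: report whether the symbol names from a
# version.map block are already in sorted order; update-abi.sh would
# rewrite them in place instead.
symbols='rte_fib_add
rte_fib_create
rte_fib_lookup'

if [ "$symbols" = "$(printf '%s\n' "$symbols" | LC_ALL=C sort)" ]; then
	result=sorted
else
	result=unsorted
fi
echo "$result"
```

Wiring a check like this into checkpatches.sh (as suggested later in the thread) would catch unsorted maps before posting rather than after merge.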

* Re: [dpdk-dev] [PATCH] sort symbols map
  2021-10-05 14:16  0% ` Kinsella, Ray
@ 2021-10-05 14:31  0%   ` David Marchand
  2021-10-05 15:06  0%   ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-05 14:31 UTC (permalink / raw)
  To: Kinsella, Ray
  Cc: dev, Thomas Monjalon, Yigit, Ferruh, Jasvinder Singh,
	Cristian Dumitrescu, Vladimir Medvedkin, Conor Walsh,
	Stephen Hemminger

On Tue, Oct 5, 2021 at 4:17 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> On 05/10/2021 10:16, David Marchand wrote:
> > Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> >
> > Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> > Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> > Fixes: 4aeb92396b85 ("rib: promote API to stable")
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> > I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
> >
> > I should have caught it when merging fib and rib patches...
> > But my eyes (or more likely brain) stopped at net/softnic bits.
> >
> > What do you think?
> > Should I wait a bit more and send a global patch to catch any missed
> > sorting just before rc1?
> >
> > In the meantime, if you merge .map updates, try to remember to run the
> > command above.
> >
> > Thanks.
> > ---
> >  drivers/net/softnic/version.map |  2 +-
> >  lib/fib/version.map             | 21 ++++++++++-----------
> >  lib/rib/version.map             | 33 ++++++++++++++++-----------------
> >  3 files changed, 27 insertions(+), 29 deletions(-)
> >
>
> Something to add to the Symbol Bot also, maybe?

The committed maps should have no issue in the first place.
The best place would probably be in checkpatches.sh so that developers
get the warning before even posting and so that maintainers fix the
issues before pushing.
But it requires checked out sources.


-- 
David Marchand



* Re: [dpdk-dev] [PATCH] sort symbols map
  2021-10-05 14:16  0% ` Kinsella, Ray
  2021-10-05 14:31  0%   ` David Marchand
@ 2021-10-05 15:06  0%   ` David Marchand
  1 sibling, 0 replies; 200+ results
From: David Marchand @ 2021-10-05 15:06 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Kinsella, Ray, Thomas Monjalon, Yigit, Ferruh,
	Jasvinder Singh, Cristian Dumitrescu, Vladimir Medvedkin,
	Conor Walsh, Stephen Hemminger

On Tue, Oct 5, 2021 at 4:17 PM Kinsella, Ray <mdr@ashroe.eu> wrote:
> On 05/10/2021 10:16, David Marchand wrote:
> > Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> >
> > Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> > Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> > Fixes: 4aeb92396b85 ("rib: promote API to stable")
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Applied, thanks.


-- 
David Marchand



* Re: [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
      2021-10-05 12:14  4% ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
@ 2021-10-05 16:07  0% ` Stephen Hemminger
  2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-10-05 16:07 UTC (permalink / raw)
  To: Harman Kalra; +Cc: dev

On Thu, 26 Aug 2021 20:27:19 +0530
Harman Kalra <hkalra@marvell.com> wrote:

> Moving struct rte_intr_handle to an internal structure to
> avoid any ABI breakage in the future, since this structure defines
> some static arrays and changing the respective macros breaks the ABI.
> E.g.:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size at probe time. Either way it's an ABI breakage.
> 
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> 
> 
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get/set wrapper APIs
> to read or manipulate its fields. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are defined
> and also hides struct rte_intr_handle definition.

I agree the rte_intr_handle and eth_devices structures need to be hidden.
But there does not appear to be an API to check whether a device supports
receive interrupt mode.

There is:
   RTE_ETH_DEV_INTR_LSC  - link state
   RTE_ETH_DEV_INTR_RMV  - interrupt on removal

but no
   RTE_ETH_DEV_INTR_RXQ  - device supports rxq interrupt

There should be a new flag reported by devices, and the intr_conf should
be checked in rte_eth_dev_configure

Doing this would require fixing many drivers and there is a risk of exposing
existing semantic bugs in applications.





* Re: [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure
  2021-10-05 13:09  0%         ` Thomas Monjalon
@ 2021-10-05 16:41  0%           ` Ananyev, Konstantin
  0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-05 16:41 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, Yigit, Ferruh, mdr, Jayatheerthan,
	Jay

> 04/10/2021 15:55, Konstantin Ananyev:
> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for 'fast' calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> 
> I don't understand why 'fast' is quoted.
> It looks strange.
> 
> 
> > +/* reset eth 'fast' API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth 'fast' API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev);
> 
> I assume "fp" stands for fast path.

Yes.

> Please write "fast path" completely in the comments.

Ok.

> > +	/* expose selection of PMD rx/tx function */
> > +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> [...]
> > +	/* point rx/tx functions to dummy ones */
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> 
> Nit: Rx/Tx
> or could be "fast path", to be consistent.
> 
> > +	/*
> > +	 * for secondary process, at that point we expect device
> > +	 * to be already 'usable', so shared data and all function pointers
> > +	 * for 'fast' devops have to be setup properly inside rte_eth_dev.
> > +	 */
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> >  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> >  	dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 948c0b71c1..fe47a660c7 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -53,6 +53,51 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointernals to internal ethdev RX/TXi
> 
> typos in above line
> 
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for 'fast' ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +	void **data;
> > +	/**< points to array of internal queue data pointers */
> > +	void **clbk;
> > +	/**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * 'fast' ethdev funcions and related data are hold in a flat array.
> > + * one entry per ethdev.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > +	/** first 64B line */
> > +	eth_rx_burst_t rx_pkt_burst;
> > +	/**< PMD receive function. */
> > +	eth_tx_burst_t tx_pkt_burst;
> > +	/**< PMD transmit function. */
> > +	eth_tx_prep_t tx_pkt_prepare;
> > +	/**< PMD transmit prepare function. */
> > +	eth_rx_queue_count_t rx_queue_count;
> > +	/**< Get the number of used RX descriptors. */
> > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > +	/**< Check the status of a Rx descriptor. */
> > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > +	/**< Check the status of a Tx descriptor. */
> > +	uintptr_t reserved[2];
> 
> uintptr_t size is not fixed.
> I think you mean uint64_t.

Nope, I meant 'uintptr_t' here.
That way it fits really nicely on both 64-bit and 32-bit systems.
For 64-bit systems we have all function pointers on the first 64B line,
and all data pointers on the second 64B line.
For 32-bit systems we have all fields within the first 64B line.

> > +
> > +	/** second 64B line */
> > +	struct rte_ethdev_qdata rxq;
> > +	struct rte_ethdev_qdata txq;
> > +	uintptr_t reserved2[4];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> 
> 



* [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI
  2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2021-10-05  0:55  4%       ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
  2021-10-05  0:55  3%       ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
@ 2021-10-05 20:15  4%       ` Dmitry Kozlyuk
  2021-10-05 20:15  4%         ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
                           ` (2 more replies)
  2 siblings, 3 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk

Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.

v4: rdline_create -> rdline_new, restore empty line (Olivier).
v3: add experimental tags and release notes for rdline.
v2: also hide struct rdline (David, Olivier).
 
Dmitry Kozlyuk (2):
  cmdline: make struct cmdline opaque
  cmdline: make struct rdline opaque

 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 22 ++++---
 doc/guides/rel_notes/deprecation.rst   |  4 --
 doc/guides/rel_notes/release_21_11.rst |  5 ++
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline.h                  | 19 ------
 lib/cmdline/cmdline_private.h          | 57 ++++++++++++++++-
 lib/cmdline/cmdline_rdline.c           | 43 ++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 87 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 10 files changed, 157 insertions(+), 93 deletions(-)

-- 
2.29.3



* [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque
  2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
@ 2021-10-05 20:15  4%         ` Dmitry Kozlyuk
  2021-10-05 20:15  3%         ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
  2021-10-07 22:10  4%         ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk, David Marchand, Olivier Matz, Ray Kinsella

Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   |  4 ----
 doc/guides/rel_notes/release_21_11.rst |  2 ++
 lib/cmdline/cmdline.h                  | 19 -------------------
 lib/cmdline/cmdline_private.h          |  8 +++++++-
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
-  content. On Linux and FreeBSD, supported prior to DPDK 20.11,
-  original structure will be kept until DPDK 21.11.
-
 * security: The functions ``rte_security_set_pkt_metadata`` and
   ``rte_security_get_userdata`` will be made inline functions and additional
   flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
 #ifndef _CMDLINE_H_
 #define _CMDLINE_H_
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
 #include <rte_common.h>
 #include <rte_compat.h>
 
@@ -27,23 +23,8 @@
 extern "C" {
 #endif
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
-	int s_in;
-	int s_out;
-	cmdline_parse_ctx_t *ctx;
-	struct rdline rdl;
-	char prompt[RDLINE_PROMPT_SIZE];
-	struct termios oldterm;
-};
-
-#else
-
 struct cmdline;
 
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
 struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
 void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
 void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
 #include <rte_os_shim.h>
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <rte_windows.h>
+#else
+#include <termios.h>
 #endif
 
 #include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
 	int is_console_input;
 	int is_console_output;
 };
+#endif
 
 struct cmdline {
 	int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
 	cmdline_parse_ctx_t *ctx;
 	struct rdline rdl;
 	char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
 	struct terminal oldterm;
 	char repeated_char;
 	WORD repeat_count;
-};
+#else
+	struct termios oldterm;
 #endif
+};
 
 /* Disable buffering and echoing, save previous settings to oldterm. */
 void terminal_adjust(struct cmdline *cl);
-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque
  2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2021-10-05 20:15  4%         ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-05 20:15  3%         ` Dmitry Kozlyuk
  2021-10-07 22:10  4%         ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-05 20:15 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson, David Marchand,
	Olivier Matz, Ray Kinsella

Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:

* rdline_new(): allocate and initialize struct rdline.
  This function replaces rdline_init() and takes an extra parameter:
  opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.

Remove rdline_init() function from library headers and export list,
because using it requires the knowledge of sizeof(struct rdline).

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 22 ++++---
 doc/guides/rel_notes/release_21_11.rst |  3 +
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline_private.h          | 49 +++++++++++++++
 lib/cmdline/cmdline_rdline.c           | 43 ++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 87 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 8 files changed, 148 insertions(+), 69 deletions(-)

diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
 	struct rdline *rdl = cmdline_get_rdline(cl);
 
 	cmdline_printf(cl, "History buffer size: %zu\n",
-			sizeof(rdl->history_buf));
+			rdline_get_history_buffer_size(rdl));
 }
 
 cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..24bc03fccb 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,19 @@ test_cmdline_parse_fns(void)
 static int
 test_cmdline_rdline_fns(void)
 {
-	struct rdline rdl;
+	struct rdline *rdl = NULL;
 	rdline_write_char_t *wc = &cmdline_write_char;
 	rdline_validate_t *v = &valid_buffer;
 	rdline_complete_t *c = &complete_buffer;
 
-	if (rdline_init(NULL, wc, v, c) >= 0)
+	rdl = rdline_new(NULL, v, c, NULL);
+	if (rdl != NULL)
 		goto error;
-	if (rdline_init(&rdl, NULL, v, c) >= 0)
+	rdl = rdline_new(wc, NULL, c, NULL);
+	if (rdl != NULL)
 		goto error;
-	if (rdline_init(&rdl, wc, NULL, c) >= 0)
-		goto error;
-	if (rdline_init(&rdl, wc, v, NULL) >= 0)
+	rdl = rdline_new(wc, v, NULL, NULL);
+	if (rdl != NULL)
 		goto error;
 	if (rdline_char_in(NULL, 0) >= 0)
 		goto error;
@@ -102,25 +103,30 @@ test_cmdline_rdline_fns(void)
 		goto error;
 	if (rdline_add_history(NULL, "history") >= 0)
 		goto error;
-	if (rdline_add_history(&rdl, NULL) >= 0)
+	if (rdline_add_history(rdl, NULL) >= 0)
 		goto error;
 	if (rdline_get_history_item(NULL, 0) != NULL)
 		goto error;
 
 	/* void functions */
+	rdline_get_history_buffer_size(NULL);
+	rdline_get_opaque(NULL);
 	rdline_newline(NULL, "prompt");
-	rdline_newline(&rdl, NULL);
+	rdline_newline(rdl, NULL);
 	rdline_stop(NULL);
 	rdline_quit(NULL);
 	rdline_restart(NULL);
 	rdline_redisplay(NULL);
 	rdline_reset(NULL);
 	rdline_clear_history(NULL);
+	rdline_free(NULL);
 
+	rdline_free(rdl);
 	return 0;
 
 error:
 	printf("Error: function accepted null parameter!\n");
+	rdline_free(rdl);
 	return -1;
 }
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
 
 * cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
 
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+  to dynamically allocate and free it, and to access user data in callbacks.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 	cl->ctx = ctx;
 
 	ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
-			cmdline_complete_buffer);
+			cmdline_complete_buffer, cl);
 	if (ret != 0) {
 		free(cl);
 		return NULL;
 	}
 
-	cl->rdl.opaque = cl;
 	cmdline_set_prompt(cl, prompt);
 	rdline_newline(&cl->rdl, cl->prompt);
 
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
 
 #include <cmdline.h>
 
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE  32
+#define RDLINE_VT100_BUF_SIZE  8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+	RDLINE_INIT,
+	RDLINE_RUNNING,
+	RDLINE_EXITED
+};
+
+struct rdline {
+	enum rdline_status status;
+	/* rdline bufs */
+	struct cirbuf left;
+	struct cirbuf right;
+	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+	char right_buf[RDLINE_BUF_SIZE];
+
+	char prompt[RDLINE_PROMPT_SIZE];
+	unsigned int prompt_size;
+
+	char kill_buf[RDLINE_BUF_SIZE];
+	unsigned int kill_size;
+
+	/* history */
+	struct cirbuf history;
+	char history_buf[RDLINE_HISTORY_BUF_SIZE];
+	int history_cur_line;
+
+	/* callbacks and func pointers */
+	rdline_write_char_t *write_char;
+	rdline_validate_t *validate;
+	rdline_complete_t *complete;
+
+	/* vt100 parser */
+	struct cmdline_vt100 vt100;
+
+	/* opaque pointer */
+	void *opaque;
+};
+
 #ifdef RTE_EXEC_ENV_WINDOWS
 struct terminal {
 	DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
 __rte_format_printf(2, 0)
 int cmdline_vdprintf(int fd, const char *format, va_list op);
 
+int rdline_init(struct rdline *rdl,
+		rdline_write_char_t *write_char,
+		rdline_validate_t *validate,
+		rdline_complete_t *complete,
+		void *opaque);
+
 #endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..d92b1cda53 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
 #include <ctype.h>
 
 #include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
 #include "cmdline_rdline.h"
 
 static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
 
 int
 rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete)
+	    rdline_write_char_t *write_char,
+	    rdline_validate_t *validate,
+	    rdline_complete_t *complete,
+	    void *opaque)
 {
 	if (!rdl || !write_char || !validate || !complete)
 		return -EINVAL;
@@ -47,10 +49,33 @@ rdline_init(struct rdline *rdl,
 	rdl->validate = validate;
 	rdl->complete = complete;
 	rdl->write_char = write_char;
+	rdl->opaque = opaque;
 	rdl->status = RDLINE_INIT;
 	return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
 }
 
+struct rdline *
+rdline_new(rdline_write_char_t *write_char,
+	   rdline_validate_t *validate,
+	   rdline_complete_t *complete,
+	   void *opaque)
+{
+	struct rdline *rdl;
+
+	rdl = malloc(sizeof(*rdl));
+	if (rdline_init(rdl, write_char, validate, complete, opaque) < 0) {
+		free(rdl);
+		rdl = NULL;
+	}
+	return rdl;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+	free(rdl);
+}
+
 void
 rdline_newline(struct rdline *rdl, const char *prompt)
 {
@@ -564,6 +589,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 	return NULL;
 }
 
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+	return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+	return rdl != NULL ? rdl->opaque : NULL;
+}
+
 int
 rdline_add_history(struct rdline * rdl, const char * buf)
 {
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..af66b70495 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
 /**
  * This file is a small equivalent to the GNU readline library, but it
  * was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
  *
  * Obviously, it does not support as many things as the GNU readline,
  * but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
  */
 
 #include <stdio.h>
+#include <rte_compat.h>
 #include <cmdline_cirbuf.h>
 #include <cmdline_vt100.h>
 
@@ -38,19 +37,6 @@
 extern "C" {
 #endif
 
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE  32
-#define RDLINE_VT100_BUF_SIZE  8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
-	RDLINE_INIT,
-	RDLINE_RUNNING,
-	RDLINE_EXITED
-};
-
 struct rdline;
 
 typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,33 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
 				char *dstbuf, unsigned int dstsize,
 				int *state);
 
-struct rdline {
-	enum rdline_status status;
-	/* rdline bufs */
-	struct cirbuf left;
-	struct cirbuf right;
-	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
-	char right_buf[RDLINE_BUF_SIZE];
-
-	char prompt[RDLINE_PROMPT_SIZE];
-	unsigned int prompt_size;
-
-	char kill_buf[RDLINE_BUF_SIZE];
-	unsigned int kill_size;
-
-	/* history */
-	struct cirbuf history;
-	char history_buf[RDLINE_HISTORY_BUF_SIZE];
-	int history_cur_line;
-
-	/* callbacks and func pointers */
-	rdline_write_char_t *write_char;
-	rdline_validate_t *validate;
-	rdline_complete_t *complete;
-
-	/* vt100 parser */
-	struct cmdline_vt100 vt100;
-
-	/* opaque pointer */
-	void *opaque;
-};
-
 /**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
+ * This function replaces rdline_init().
  * \param write_char The function used by the function to write a character
  * \param validate A pointer to the function to execute when the
  *                 user validates the buffer.
  * \param complete A pointer to the function to execute when the
  *                 user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return Allocated structure on success, NULL on failure.
  */
-int rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete);
+__rte_experimental
+struct rdline *rdline_new(rdline_write_char_t *write_char,
+			  rdline_validate_t *validate,
+			  rdline_complete_t *complete,
+			  void *opaque);
 
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ *            If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
 
 /**
  * Init the current buffer, and display a prompt.
@@ -194,6 +161,18 @@ void rdline_clear_history(struct rdline *rdl);
  */
 char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
 
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..b9bbb87510 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
 	rdline_clear_history;
 	rdline_get_buffer;
 	rdline_get_history_item;
-	rdline_init;
 	rdline_newline;
 	rdline_quit;
 	rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
 EXPERIMENTAL {
 	global:
 
+	# added in 20.11
 	cmdline_get_rdline;
 
+	# added in 21.11
+	rdline_new;
+	rdline_free;
+	rdline_get_history_buffer_size;
+	rdline_get_opaque;
+
 	local: *;
 };
-- 
2.29.3


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3 04/14] eventdev: move inline APIs into separate structure
  @ 2021-10-06  6:50  2%     ` pbhagavatula
    1 sibling, 0 replies; 200+ results
From: pbhagavatula @ 2021-10-06  6:50 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
 lib/eventdev/eventdev_pmd.h      |  38 +++++++++++
 lib/eventdev/eventdev_pmd_pci.h  |   4 +-
 lib/eventdev/eventdev_private.c  | 112 +++++++++++++++++++++++++++++++
 lib/eventdev/meson.build         |   1 +
 lib/eventdev/rte_eventdev.c      |  22 +++++-
 lib/eventdev/rte_eventdev_core.h |  28 ++++++++
 lib/eventdev/version.map         |   6 ++
 7 files changed, 209 insertions(+), 2 deletions(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7eb2aa0520..b188280778 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1189,4 +1189,42 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ *
+ * @internal
+ * This is the last step of device probing.
+ * It must be called after a port is allocated and initialized successfully.
+ *
+ * @param eventdev
+ *  New event device.
+ */
+__rte_internal
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev);
+
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ * The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+		     const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..499852db16 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,10 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
 
 	/* Invoke PMD device initialization function */
 	retval = devinit(eventdev);
-	if (retval == 0)
+	if (retval == 0) {
+		event_dev_probing_finish(eventdev);
 		return 0;
+	}
 
 	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
 			" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_event_fp_ops dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.data = dummy_data,
+	};
+
+	*fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+		     const struct rte_eventdev *dev)
+{
+	fp_op->enqueue = dev->enqueue;
+	fp_op->enqueue_burst = dev->enqueue_burst;
+	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->dequeue = dev->dequeue;
+	fp_op->dequeue_burst = dev->dequeue_burst;
+	fp_op->txa_enqueue = dev->txa_enqueue;
+	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
 endif
 
 sources = files(
+        'eventdev_private.c',
         'rte_eventdev.c',
         'rte_event_ring.c',
         'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..4c30a37831 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -300,8 +303,8 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 		event_dev_queue_config(dev, 0);
 		event_dev_port_config(dev, 0);
 	}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
 	else
 		return diag;
 
+	event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
 	return 0;
 }
 
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_eventdev_trace_stop(dev_id);
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
 int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
@@ -1435,6 +1445,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	if (eventdev == NULL)
 		return -EINVAL;
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + eventdev->data->dev_id);
 	eventdev->attached = RTE_EVENTDEV_DETACHED;
 	eventdev_globals.nb_devs--;
 
@@ -1460,6 +1471,15 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev)
+{
+	if (eventdev == NULL)
+		return;
+
+	event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+			     eventdev);
+}
 
 static int
 handle_dev_list(const char *cmd __rte_unused,
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..4461073101 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,34 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_event_fp_ops {
+	event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[2];
+
+	void **data;
+	/**< points to array of internal port data pointers */
+	uintptr_t reserved2[4];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..a3a732089b 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	#added in 21.11
+	rte_event_fp_ops;
+
 	local: *;
 };
 
@@ -141,6 +144,9 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	event_dev_fp_ops_reset;
+	event_dev_fp_ops_set;
+	event_dev_probing_finish;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v3 14/14] eventdev: mark trace variables as internal
  @ 2021-10-06  7:11  5%       ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-10-06  7:11 UTC (permalink / raw)
  To: Pavan Nikhilesh, Ray Kinsella; +Cc: Jerin Jacob Kollanukkaran, dev

Hello Pavan, Ray,

On Wed, Oct 6, 2021 at 8:52 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Mark rte_trace global variables as internal i.e. remove them
> from experimental section of version map.
> Some of them are used in inline APIs, mark those as global.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>

Please, sort those symbols.
I check with ./devtools/update-abi.sh $(cat ABI_VERSION)


> ---
>  lib/eventdev/version.map | 77 ++++++++++++++++++----------------------
>  1 file changed, 35 insertions(+), 42 deletions(-)
>
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 068d186c66..617fff0ae6 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -88,57 +88,19 @@ DPDK_22 {
>         rte_event_vector_pool_create;
>         rte_eventdevs;
>
> -       #added in 21.11
> -       rte_event_fp_ops;
> -
> -       local: *;
> -};
> -
> -EXPERIMENTAL {
> -       global:
> -
>         # added in 20.05

At the next ABI bump, ./devtools/update-abi.sh will strip those
comments from the stable section.
You can notice this when you run ./devtools/update-abi.sh $CURRENT_ABI
as suggested above.

I would strip the comments now that the symbols are going to stable.
Ray, do you have an opinion?


-- 
David Marchand


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (3 preceding siblings ...)
  2021-10-04 13:56  8%       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-06 16:42  0%       ` Ali Alnubani
  2021-10-06 17:26  0%         ` Ali Alnubani
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  5 siblings, 1 reply; 200+ results
From: Ali Alnubani @ 2021-10-06 16:42 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
	jay.jayatheerthan

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> Sent: Monday, October 4, 2021 4:56 PM
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/version.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to the DPDK_22
>    section makes checkpatch.sh complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from the function pointers themselves, each element of
>    this flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with the new implementation, RX/TX callback invocation
>    will cost one extra function call. That might cause some slowdown for
>    code paths with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
> 
> Konstantin Ananyev (7):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy ethdev 'fast' API into separate structure
>   ethdev: make burst functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: remove legacy Rx descriptor done API
>   ethdev: hide eth dev related structures
> 

Tested single and multi-core packet forwarding performance with testpmd on both ConnectX-5 and ConnectX-6 Dx.

Tested-by: Ali Alnubani <alialnu@nvidia.com>

Thanks,
Ali

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
  2021-10-06 16:42  0%       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-06 17:26  0%         ` Ali Alnubani
  0 siblings, 0 replies; 200+ results
From: Ali Alnubani @ 2021-10-06 17:26 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, Slava Ovsiienko, sthemmin, NBU-Contact-longli,
	heinrich.kuhn, kirankumark, andrew.rybchenko, mczekaj, jiawenwu,
	jianwang, maxime.coquelin, chenbo.xia,
	NBU-Contact-Thomas Monjalon, ferruh.yigit, mdr,
	jay.jayatheerthan

> -----Original Message-----
> From: Ali Alnubani
> Sent: Wednesday, October 6, 2021 7:43 PM
> To: Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com; maxime.coquelin@redhat.com;
> chenbo.xia@intel.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com
> Subject: RE: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> > Sent: Monday, October 4, 2021 4:56 PM
> > To: dev@dpdk.org
> > Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> > ndabilpuram@marvell.com; adwivedi@marvell.com;
> > shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> > john.miller@atomicrules.com; irusskikh@marvell.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> > rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> > sachin.saxena@oss.nxp.com; haiyue.wang@intel.com;
> johndale@cisco.com;
> > hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> > humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> > beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com;
> > Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; sthemmin@microsoft.com; NBU-Contact-longli
> > <longli@microsoft.com>; heinrich.kuhn@corigine.com;
> > kirankumark@marvell.com; andrew.rybchenko@oktetlabs.ru;
> > mczekaj@marvell.com; jiawenwu@trustnetic.com;
> jianwang@trustnetic.com;
> > maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-
> Thomas
> > Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> mdr@ashroe.eu;
> > jay.jayatheerthan@intel.com; Konstantin Ananyev
> > <konstantin.ananyev@intel.com>
> > Subject: [dpdk-dev] [PATCH v4 0/7] hide eth dev related structures
> >
> > v4 changes:
> >  - Fix secondary process attach (Pavan)
> >  - Fix build failure (Ferruh)
> >  - Update lib/ethdev/version.map (Ferruh)
> >    Note that moving newly added symbols from EXPERIMENTAL to the DPDK_22
> >    section makes checkpatch.sh complain.
> >
> > v3 changes:
> >  - Changes in public struct naming (Jerin/Haiyue)
> >  - Split patches
> >  - Update docs
> >  - Shamelessly included Andrew's patch:
> >
> > https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-
> > andrew.rybchenko@oktetlabs.ru/
> >    into these series.
> >    I have to do similar thing here, so decided to avoid duplicated effort.
> >
> > The aim of these patch series is to make rte_ethdev core data
> > structures (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback,
> > etc.) internal to DPDK and not visible to the user.
> > That should allow future possible changes to core ethdev related
> > structures to be transparent to the user and help to improve ABI/API
> stability.
> > Note that current ethdev API is preserved, but it is a formal ABI break.
> >
> > The work is based on previous discussions at:
> > https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> > https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> > and consists of the following main points:
> > 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
> >    related data pointer from rte_eth_dev into a separate flat array.
> >    We keep it public to still be able to use inline functions for these
> >    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> >    Note that apart from the function pointers themselves, each element of
> >    this flat array also contains two opaque pointers for each ethdev:
> >    1) a pointer to an array of internal queue data pointers
> >    2) a pointer to an array of queue callback data pointers.
> >    Note that exposing this extra information allows us to avoid extra
> >    changes inside PMD level, plus should help to avoid possible
> >    performance degradation.
> > 2. Change implementation of 'fast' inline ethdev functions
> >    (rte_eth_rx_burst(), etc.) to use new public flat array.
> >    While it is an ABI breakage, this change is intended to be transparent
> >    for both users (no changes in user app is required) and PMD developers
> >    (no changes in PMD is required).
> >    One extra note - with the new implementation, RX/TX callback invocation
> >    will cost one extra function call. That might cause some slowdown for
> >    code paths with RX/TX callbacks heavily involved.
> >    Hope such trade-off is acceptable for the community.
> > 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and
> related
> >    things into internal header: <ethdev_driver.h>.
> >
> > That approach was selected to:
> >   - Avoid(/minimize) possible performance losses.
> >   - Minimize required changes inside PMDs.
> >
> > Performance testing results (ICX 2.0GHz, E810 (ice)):
> >  - testpmd macswap fwd mode, plus
> >    a) no RX/TX callbacks:
> >       no actual slowdown observed
> >    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
> >       ~2% slowdown
> >  - l3fwd: no actual slowdown observed
> >
> > Would like to thank everyone who already reviewed and tested previous
> > versions of these series. All other interested parties please don't be
> > shy and provide your feedback.
> >
> > Konstantin Ananyev (7):
> >   ethdev: allocate max space for internal queue array
> >   ethdev: change input parameters for rx_queue_count
> >   ethdev: copy ethdev 'fast' API into separate structure
> >   ethdev: make burst functions to use new flat array
> >   ethdev: add API to retrieve multiple ethernet addresses
> >   ethdev: remove legacy Rx descriptor done API
> >   ethdev: hide eth dev related structures
> >
> 
> Tested single and multi-core packet forwarding performance with testpmd
> on both ConnectX-5 and ConnectX-6 Dx.
> 

I should have mentioned that I didn't see any noticeable regressions with the cases I mentioned.

> Tested-by: Ali Alnubani <alialnu@nvidia.com>
> 
> Thanks,
> Ali

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
  @ 2021-10-06 20:58  4% ` Nicolas Chautru
  2021-10-07 12:01  0%   ` Tom Rix
  2021-10-07 13:13  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
  0 siblings, 2 replies; 200+ results
From: Nicolas Chautru @ 2021-10-06 20:58 UTC (permalink / raw)
  To: dev, gakhil, nipun.gupta, trix
  Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal,
	david.marchand, Nicolas Chautru

Add device information to explicitly capture the assumed byte
endianness of the input/output data being processed.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst             | 1 +
 drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
 drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
 lib/bbdev/rte_bbdev.h                              | 8 ++++++++
 6 files changed, 13 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index a8900a3..f0b3006 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,7 @@ API Changes
 
 * bbdev: Added capability related to more comprehensive CRC options.
 
+* bbdev: Added device info related to data byte endianness processing assumption.
 
 ABI Changes
 -----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 4e2feef..eb2c6c1 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1089,6 +1089,7 @@
 #else
 	dev_info->harq_buffer_size = 0;
 #endif
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 	acc100_check_ir(d);
 }
 
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc8..c7f15c0 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c424..72e213e 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index e1db2bf..0cab91a 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -253,6 +253,7 @@ struct turbo_sw_queue {
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->min_alignment = 64;
 	dev_info->harq_buffer_size = 0;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
 }
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e..b3f3000 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -49,6 +49,12 @@ enum rte_bbdev_state {
 	RTE_BBDEV_INITIALIZED
 };
 
+/** Definitions of device data byte endianness types */
+enum rte_bbdev_endianness {
+	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
+	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
+};
+
 /**
  * Get the total number of devices that have been successfully initialised.
  *
@@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
 	uint16_t min_alignment;
 	/** HARQ memory available in kB */
 	uint32_t harq_buffer_size;
+	/** Byte endianness assumption for input/output data */
+	enum rte_bbdev_endianness data_endianness;
 	/** Default queue configuration used if none is supplied  */
 	struct rte_bbdev_queue_conf default_queue_conf;
 	/** Device operation capabilities */
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v9 1/8] bbdev: add device info related to data endianness assumption
  @ 2021-10-07  9:33  4%   ` nipun.gupta
  0 siblings, 0 replies; 200+ results
From: nipun.gupta @ 2021-10-07  9:33 UTC (permalink / raw)
  To: dev, gakhil, nicolas.chautru; +Cc: david.marchand, hemant.agrawal

From: Nicolas Chautru <nicolas.chautru@intel.com>

Add device information to explicitly capture the assumed byte
endianness of the input/output data being processed.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst             | 1 +
 drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
 drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
 lib/bbdev/rte_bbdev.h                              | 8 ++++++++
 6 files changed, 13 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..a991f01bf3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,7 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* bbdev: Added device info related to data byte endianness processing assumption.
 
 ABI Changes
 -----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..5c3901c9ca 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1088,6 +1088,7 @@ acc100_dev_info_get(struct rte_bbdev *dev,
 #else
 	dev_info->harq_buffer_size = 0;
 #endif
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 	acc100_check_ir(d);
 }
 
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..c7f15c0bfc 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..72e213ed9a 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 77e9a2ecbc..193f701028 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -251,6 +251,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->min_alignment = 64;
 	dev_info->harq_buffer_size = 0;
+	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
 
 	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
 }
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e697..b3f30002cd 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -49,6 +49,12 @@ enum rte_bbdev_state {
 	RTE_BBDEV_INITIALIZED
 };
 
+/** Definitions of device data byte endianness types */
+enum rte_bbdev_endianness {
+	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
+	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
+};
+
 /**
  * Get the total number of devices that have been successfully initialised.
  *
@@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
 	uint16_t min_alignment;
 	/** HARQ memory available in kB */
 	uint32_t harq_buffer_size;
+	/** Byte endianness assumption for input/output data */
+	enum rte_bbdev_endianness data_endianness;
 	/** Default queue configuration used if none is supplied  */
 	struct rte_bbdev_queue_conf default_queue_conf;
 	/** Device operation capabilities */
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
                         ` (4 preceding siblings ...)
  2021-10-06 16:42  0%       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
@ 2021-10-07 11:27  4%       ` Konstantin Ananyev
  2021-10-07 11:27  6%         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
                           ` (5 more replies)
  5 siblings, 6 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

v5 changes:
- Fix spelling (Thomas/David)
- Rename internal helper functions (David)
- Reorder patches and update commit messages (Thomas)
- Update comments (Thomas)
- Changed layout in rte_eth_fp_ops, to group functions and
   related data based on their functionality:
   first 64B line for Rx, second one for Tx.
   Didn't observe any real performance difference compared to the
   original layout, though decided to keep the new one, as it seems
   a bit more plausible.

v4 changes:
 - Fix secondary process attach (Pavan)
 - Fix build failure (Ferruh)
 - Update lib/ethdev/version.map (Ferruh)
   Note that moving newly added symbols from EXPERIMENTAL to the DPDK_22
   section makes checkpatch.sh complain.

v3 changes:
 - Changes in public struct naming (Jerin/Haiyue)
 - Split patches
 - Update docs
 - Shamelessly included Andrew's patch:
   https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
   into these series.
   I have to do similar thing here, so decided to avoid duplicated effort.

The aim of these patch series is to make rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
DPDK and not visible to the user.
That should allow future possible changes to core ethdev related structures
to be transparent to the user and help to improve ABI/API stability.
Note that current ethdev API is preserved, but it is a formal ABI break.

The work is based on previous discussions at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
https://www.mail-archive.com/dev@dpdk.org/msg216685.html
and consists of the following main points:
1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
   related data pointer from rte_eth_dev into a separate flat array.
   We keep it public to still be able to use inline functions for these
   'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
   Note that apart from the function pointers themselves, each element of
   this flat array also contains two opaque pointers for each ethdev:
   1) a pointer to an array of internal queue data pointers
   2) a pointer to an array of queue callback data pointers.
   Note that exposing this extra information allows us to avoid extra
   changes inside PMD level, plus should help to avoid possible
   performance degradation.
2. Change implementation of 'fast' inline ethdev functions
   (rte_eth_rx_burst(), etc.) to use new public flat array.
   While it is an ABI breakage, this change is intended to be transparent
   for both users (no changes in user app is required) and PMD developers
   (no changes in PMD is required).
   One extra note - with the new implementation, RX/TX callback invocation
   will cost one extra function call. That might cause some slowdown for
   code paths with RX/TX callbacks heavily involved.
   Hope such trade-off is acceptable for the community.
3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
   things into internal header: <ethdev_driver.h>.

That approach was selected to:
  - Avoid(/minimize) possible performance losses.
  - Minimize required changes inside PMDs.

Performance testing results (ICX 2.0GHz, E810 (ice)):
 - testpmd macswap fwd mode, plus
   a) no RX/TX callbacks:
      no actual slowdown observed
   b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
      ~2% slowdown
 - l3fwd: no actual slowdown observed

Would like to thank everyone who already reviewed and tested previous
versions of these series. All other interested parties please don't be shy
and provide your feedback.

Andrew Rybchenko (1):
  ethdev: remove legacy Rx descriptor done API

Konstantin Ananyev (6):
  ethdev: allocate max space for internal queue array
  ethdev: change input parameters for rx_queue_count
  ethdev: copy fast-path API into separate structure
  ethdev: make fast-path functions to use new flat array
  ethdev: add API to retrieve multiple ethernet addresses
  ethdev: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 doc/guides/nics/features.rst                  |   6 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |  21 ++
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/ark/ark_ethdev_rx.c               |   4 +-
 drivers/net/ark/ark_ethdev_rx.h               |   3 +-
 drivers/net/atlantic/atl_ethdev.h             |   2 +-
 drivers/net/atlantic/atl_rxtx.c               |   9 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/e1000/e1000_ethdev.h              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |   1 -
 drivers/net/e1000/em_rxtx.c                   |  21 +-
 drivers/net/e1000/igb_ethdev.c                |   2 -
 drivers/net/e1000/igb_rxtx.c                  |  21 +-
 drivers/net/enic/enic_ethdev.c                |  12 +-
 drivers/net/fm10k/fm10k.h                     |   5 +-
 drivers/net/fm10k/fm10k_ethdev.c              |   1 -
 drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
 drivers/net/hns3/hns3_rxtx.c                  |   7 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |   1 -
 drivers/net/i40e/i40e_rxtx.c                  |  30 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/iavf/iavf_rxtx.c                  |   4 +-
 drivers/net/iavf/iavf_rxtx.h                  |   2 +-
 drivers/net/ice/ice_rxtx.c                    |   4 +-
 drivers/net/ice/ice_rxtx.h                    |   2 +-
 drivers/net/igc/igc_ethdev.c                  |   1 -
 drivers/net/igc/igc_txrx.c                    |  23 +-
 drivers/net/igc/igc_txrx.h                    |   5 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
 drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
 drivers/net/mlx5/mlx5_rx.c                    |  26 +-
 drivers/net/mlx5/mlx5_rx.h                    |   2 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/netvsc/hn_var.h                   |   3 +-
 drivers/net/nfp/nfp_rxtx.c                    |   4 +-
 drivers/net/nfp/nfp_rxtx.h                    |   3 +-
 drivers/net/octeontx2/otx2_ethdev.c           |   1 -
 drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
 drivers/net/sfc/sfc_ethdev.c                  |  29 +-
 drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
 drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
 drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
 drivers/net/vhost/rte_eth_vhost.c             |   4 +-
 drivers/net/virtio/virtio_ethdev.c            |   1 -
 lib/ethdev/ethdev_driver.h                    | 148 +++++++++
 lib/ethdev/ethdev_private.c                   |  83 +++++
 lib/ethdev/ethdev_private.h                   |   7 +
 lib/ethdev/rte_ethdev.c                       |  89 ++++--
 lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
 lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
 lib/ethdev/version.map                        |   8 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 67 files changed, 677 insertions(+), 564 deletions(-)

-- 
2.26.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
@ 2021-10-07 11:27  6%         ` Konstantin Ananyev
  2021-10-11  8:06  0%           ` Andrew Rybchenko
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
                           ` (4 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Currently the majority of fast-path ethdev ops take pointers to internal
queue data structures as an input parameter, while eth_rx_queue_count()
takes a pointer to rte_eth_dev and a queue index.
For future work to hide rte_eth_devices[] and friends, it would be
plausible to unify the parameter list of all fast-path ethdev ops.
This patch changes eth_rx_queue_count() to accept a pointer to internal
queue data as its input parameter.
While this change is transparent to the user, it still counts as an ABI
change, as eth_rx_queue_count_t is used by the public ethdev inline
function rte_eth_rx_queue_count().

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst  |  6 ++++++
 drivers/net/ark/ark_ethdev_rx.c         |  4 ++--
 drivers/net/ark/ark_ethdev_rx.h         |  3 +--
 drivers/net/atlantic/atl_ethdev.h       |  2 +-
 drivers/net/atlantic/atl_rxtx.c         |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  8 +++++---
 drivers/net/dpaa/dpaa_ethdev.c          |  9 ++++-----
 drivers/net/dpaa2/dpaa2_ethdev.c        |  9 ++++-----
 drivers/net/e1000/e1000_ethdev.h        |  6 ++----
 drivers/net/e1000/em_rxtx.c             |  4 ++--
 drivers/net/e1000/igb_rxtx.c            |  4 ++--
 drivers/net/enic/enic_ethdev.c          | 12 ++++++------
 drivers/net/fm10k/fm10k.h               |  2 +-
 drivers/net/fm10k/fm10k_rxtx.c          |  4 ++--
 drivers/net/hns3/hns3_rxtx.c            |  7 +++++--
 drivers/net/hns3/hns3_rxtx.h            |  2 +-
 drivers/net/i40e/i40e_rxtx.c            |  4 ++--
 drivers/net/i40e/i40e_rxtx.h            |  3 +--
 drivers/net/iavf/iavf_rxtx.c            |  4 ++--
 drivers/net/iavf/iavf_rxtx.h            |  2 +-
 drivers/net/ice/ice_rxtx.c              |  4 ++--
 drivers/net/ice/ice_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c              |  5 ++---
 drivers/net/igc/igc_txrx.h              |  3 +--
 drivers/net/ixgbe/ixgbe_ethdev.h        |  3 +--
 drivers/net/ixgbe/ixgbe_rxtx.c          |  4 ++--
 drivers/net/mlx5/mlx5_rx.c              | 26 ++++++++++++-------------
 drivers/net/mlx5/mlx5_rx.h              |  2 +-
 drivers/net/netvsc/hn_rxtx.c            |  4 ++--
 drivers/net/netvsc/hn_var.h             |  2 +-
 drivers/net/nfp/nfp_rxtx.c              |  4 ++--
 drivers/net/nfp/nfp_rxtx.h              |  3 +--
 drivers/net/octeontx2/otx2_ethdev.h     |  2 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c |  8 ++++----
 drivers/net/sfc/sfc_ethdev.c            | 12 ++++++------
 drivers/net/thunderx/nicvf_ethdev.c     |  3 +--
 drivers/net/thunderx/nicvf_rxtx.c       |  4 ++--
 drivers/net/thunderx/nicvf_rxtx.h       |  2 +-
 drivers/net/txgbe/txgbe_ethdev.h        |  3 +--
 drivers/net/txgbe/txgbe_rxtx.c          |  4 ++--
 drivers/net/vhost/rte_eth_vhost.c       |  4 ++--
 lib/ethdev/rte_ethdev.h                 |  2 +-
 lib/ethdev/rte_ethdev_core.h            |  3 +--
 43 files changed, 103 insertions(+), 110 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 436d29afda..ca5d169598 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -226,6 +226,12 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: Input parameters for ``eth_rx_queue_count_t`` was changed.
+  Instead of pointer to ``rte_eth_dev`` and queue index, now it accepts pointer
+  to internal queue data as input parameter. While this change is transparent
+  to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
+  is used by  public inline function ``rte_eth_rx_queue_count``.
+
 
 Known Issues
 ------------
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index d255f0177b..98658ce621 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -388,11 +388,11 @@ eth_ark_rx_queue_drain(struct ark_rx_queue *queue)
 }
 
 uint32_t
-eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+eth_ark_dev_rx_queue_count(void *rx_queue)
 {
 	struct ark_rx_queue *queue;
 
-	queue = dev->data->rx_queues[queue_id];
+	queue = rx_queue;
 	return (queue->prod_index - queue->cons_index);	/* mod arith */
 }
 
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index c8dc340a8a..859fcf1e6f 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -17,8 +17,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       unsigned int socket_id,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp);
-uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
-				    uint16_t rx_queue_id);
+uint32_t eth_ark_dev_rx_queue_count(void *rx_queue);
 int eth_ark_rx_stop_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_rx_start_queue(struct rte_eth_dev *dev, uint16_t queue_id);
 uint16_t eth_ark_recv_pkts_noop(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c..e808460520 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -66,7 +66,7 @@ int atl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t atl_rx_queue_count(void *rx_queue);
 
 int atl_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int atl_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..35bb13044e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -689,18 +689,13 @@ atl_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 /* Return Rx queue avail count */
 
 uint32_t
-atl_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+atl_rx_queue_count(void *rx_queue)
 {
 	struct atl_rx_queue *rxq;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_queue_id >= dev->data->nb_rx_queues) {
-		PMD_DRV_LOG(ERR, "Invalid RX queue id=%d", rx_queue_id);
-		return 0;
-	}
-
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 
 	if (rxq == NULL)
 		return 0;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..72c3d4f0fc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3150,20 +3150,22 @@ bnxt_dev_led_off_op(struct rte_eth_dev *dev)
 }
 
 static uint32_t
-bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+bnxt_rx_queue_count_op(void *rx_queue)
 {
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt *bp;
 	struct bnxt_cp_ring_info *cpr;
 	uint32_t desc = 0, raw_cons, cp_ring_size;
 	struct bnxt_rx_queue *rxq;
 	struct rx_pkt_cmpl *rxcmp;
 	int rc;
 
+	rxq = rx_queue;
+	bp = rxq->bp;
+
 	rc = is_bnxt_in_error(bp);
 	if (rc)
 		return rc;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
 	cpr = rxq->cp_ring;
 	raw_cons = cpr->cp_raw_cons;
 	cp_ring_size = cpr->cp_ring_struct->ring_size;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..b5589300c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1278,17 +1278,16 @@ static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 }
 
 static uint32_t
-dpaa_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa_dev_rx_queue_count(void *rx_queue)
 {
-	struct dpaa_if *dpaa_intf = dev->data->dev_private;
-	struct qman_fq *rxq = &dpaa_intf->rx_queues[rx_queue_id];
+	struct qman_fq *rxq = rx_queue;
 	u32 frm_cnt = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
 	if (qman_query_fq_frm_cnt(rxq, &frm_cnt) == 0) {
-		DPAA_PMD_DEBUG("RX frame count for q(%d) is %u",
-			       rx_queue_id, frm_cnt);
+		DPAA_PMD_DEBUG("RX frame count for q(%p) is %u",
+			       rx_queue, frm_cnt);
 	}
 	return frm_cnt;
 }
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 275656fbe4..43d46b595e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1013,10 +1013,9 @@ dpaa2_dev_tx_queue_release(void *q __rte_unused)
 }
 
 static uint32_t
-dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+dpaa2_dev_rx_queue_count(void *rx_queue)
 {
 	int32_t ret;
-	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q;
 	struct qbman_swp *swp;
 	struct qbman_fq_query_np_rslt state;
@@ -1033,12 +1032,12 @@ dpaa2_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	dpaa2_q = (struct dpaa2_queue *)priv->rx_vq[rx_queue_id];
+	dpaa2_q = rx_queue;
 
 	if (qbman_fq_query_state(swp, dpaa2_q->fqid, &state) == 0) {
 		frame_cnt = qbman_fq_state_frame_count(&state);
-		DPAA2_PMD_DP_DEBUG("RX frame count for q(%d) is %u",
-				rx_queue_id, frame_cnt);
+		DPAA2_PMD_DP_DEBUG("RX frame count for q(%p) is %u",
+				rx_queue, frame_cnt);
 	}
 	return frame_cnt;
 }
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index c01e3ee9c5..fff52958df 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -399,8 +399,7 @@ int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igb_rx_queue_count(void *rx_queue);
 
 int eth_igb_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
@@ -474,8 +473,7 @@ int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_em_rx_queue_count(void *rx_queue);
 
 int eth_em_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 048b9148ed..13ea3a77f4 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1489,14 +1489,14 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_em_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_em_rx_queue_count(void *rx_queue)
 {
 #define EM_RXQ_SCAN_INTERVAL 4
 	volatile struct e1000_rx_desc *rxdp;
 	struct em_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 46a7789d90..0ee1b8d48d 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1769,14 +1769,14 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-eth_igb_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_igb_rx_queue_count(void *rx_queue)
 {
 #define IGB_RXQ_SCAN_INTERVAL 4
 	volatile union e1000_adv_rx_desc *rxdp;
 	struct igb_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b..5b2d60ad9c 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -233,18 +233,18 @@ static void enicpmd_dev_rx_queue_release(void *rxq)
 	enic_free_rq(rxq);
 }
 
-static uint32_t enicpmd_dev_rx_queue_count(struct rte_eth_dev *dev,
-					   uint16_t rx_queue_id)
+static uint32_t enicpmd_dev_rx_queue_count(void *rx_queue)
 {
-	struct enic *enic = pmd_priv(dev);
+	struct enic *enic;
+	struct vnic_rq *sop_rq;
 	uint32_t queue_count = 0;
 	struct vnic_cq *cq;
 	uint32_t cq_tail;
 	uint16_t cq_idx;
-	int rq_num;
 
-	rq_num = enic_rte_rq_idx_to_sop_idx(rx_queue_id);
-	cq = &enic->cq[enic_cq_rq(enic, rq_num)];
+	sop_rq = rx_queue;
+	enic = vnic_dev_priv(sop_rq->vdev);
+	cq = &enic->cq[enic_cq_rq(enic, sop_rq->index)];
 	cq_idx = cq->to_clean;
 
 	cq_tail = ioread32(&cq->ctrl->cq_tail);
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 2e47ada829..17c73c4dc5 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -324,7 +324,7 @@ uint16_t fm10k_recv_scattered_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+fm10k_dev_rx_queue_count(void *rx_queue);
 
 int
 fm10k_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index d9833505d1..b3515ae96a 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -367,14 +367,14 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 }
 
 uint32_t
-fm10k_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+fm10k_dev_rx_queue_count(void *rx_queue)
 {
 #define FM10K_RXQ_SCAN_INTERVAL 4
 	volatile union fm10k_rx_desc *rxdp;
 	struct fm10k_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->hw_ring[rxq->next_dd];
 	while ((desc < rxq->nb_desc) &&
 		rxdp->w.status & rte_cpu_to_le_16(FM10K_RXD_STATUS_DD)) {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..04791ae7d0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4673,7 +4673,7 @@ hns3_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 }
 
 uint32_t
-hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+hns3_rx_queue_count(void *rx_queue)
 {
 	/*
 	 * Number of BDs that have been processed by the driver
@@ -4681,9 +4681,12 @@ hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	 */
 	uint32_t driver_hold_bd_num;
 	struct hns3_rx_queue *rxq;
+	const struct rte_eth_dev *dev;
 	uint32_t fbd_num;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
+	dev = &rte_eth_devices[rxq->port_id];
+
 	fbd_num = hns3_read_dev(rxq, HNS3_RING_RX_FBDNUM_REG);
 	if (dev->rx_pkt_burst == hns3_recv_pkts_vec ||
 	    dev->rx_pkt_burst == hns3_recv_pkts_vec_sve)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..34a028701f 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -696,7 +696,7 @@ int hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			struct rte_mempool *mp);
 int hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 			unsigned int socket, const struct rte_eth_txconf *conf);
-uint32_t hns3_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t hns3_rx_queue_count(void *rx_queue);
 int hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 0fd9fef8e0..f5bebde302 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2107,14 +2107,14 @@ i40e_dev_rx_queue_release(void *rxq)
 }
 
 uint32_t
-i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+i40e_dev_rx_queue_count(void *rx_queue)
 {
 #define I40E_RXQ_SCAN_INTERVAL 4
 	volatile union i40e_rx_desc *rxdp;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 	while ((desc < rxq->nb_rx_desc) &&
 		((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 842924bce5..d495a741b6 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -225,8 +225,7 @@ int i40e_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
 void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
 
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
+uint32_t i40e_dev_rx_queue_count(void *rx_queue);
 int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..3dc1f04380 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2799,14 +2799,14 @@ iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 
 /* Get the number of used descriptors of a rx queue */
 uint32_t
-iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
+iavf_dev_rxq_count(void *rx_queue)
 {
 #define IAVF_RXQ_SCAN_INTERVAL 4
 	volatile union iavf_rx_desc *rxdp;
 	struct iavf_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..2f7bec2b63 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -453,7 +453,7 @@ void iavf_dev_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_rxq_info *qinfo);
 void iavf_dev_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			  struct rte_eth_txq_info *qinfo);
-uint32_t iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t iavf_dev_rxq_count(void *rx_queue);
 int iavf_dev_rx_desc_status(void *rx_queue, uint16_t offset);
 int iavf_dev_tx_desc_status(void *tx_queue, uint16_t offset);
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e69..c92fc5053b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1444,14 +1444,14 @@ ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 uint32_t
-ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ice_rx_queue_count(void *rx_queue)
 {
 #define ICE_RXQ_SCAN_INTERVAL 4
 	volatile union ice_rx_flex_desc *rxdp;
 	struct ice_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	while ((desc < rxq->nb_rx_desc) &&
 	       rte_le_to_cpu_16(rxdp->wb.status_error0) &
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index eef76ffdc5..8d078e0edc 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -226,7 +226,7 @@ uint16_t ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 void ice_set_tx_function_flag(struct rte_eth_dev *dev,
 			      struct ice_tx_queue *txq);
 void ice_set_tx_function(struct rte_eth_dev *dev);
-uint32_t ice_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t ice_rx_queue_count(void *rx_queue);
 void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 3979dca660..2498cfd290 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -722,8 +722,7 @@ void eth_igc_rx_queue_release(void *rxq)
 		igc_rx_queue_release(rxq);
 }
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id)
+uint32_t eth_igc_rx_queue_count(void *rx_queue)
 {
 	/**
 	 * Check the DD bit of a rx descriptor of each 4 in a group,
@@ -736,7 +735,7 @@ uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
 	struct igc_rx_queue *rxq;
 	uint16_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while (desc < rxq->nb_rx_desc - rxq->rx_tail) {
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index d6f3799639..3b4c7450cd 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -22,8 +22,7 @@ int eth_igc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
-uint32_t eth_igc_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t eth_igc_rx_queue_count(void *rx_queue);
 
 int eth_igc_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 37976902a1..6b7a4079db 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -602,8 +602,7 @@ int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_rx_queue_count(void *rx_queue);
 
 int ixgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0af9ce8aee..8e056db761 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3258,14 +3258,14 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+ixgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define IXGBE_RXQ_SCAN_INTERVAL 4
 	volatile union ixgbe_adv_rx_desc *rxdp;
 	struct ixgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..1a9eb35acc 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -240,32 +240,32 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 /**
  * DPDK callback to get the number of used descriptors in a RX queue.
  *
- * @param dev
- *   Pointer to the device structure.
- *
- * @param rx_queue_id
- *   The Rx queue.
+ * @param rx_queue
+ *   The Rx queue pointer.
  *
  * @return
  *   The number of used rx descriptor.
  *   -EINVAL if the queue is invalid
  */
 uint32_t
-mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+mlx5_rx_queue_count(void *rx_queue)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq;
+	struct mlx5_rxq_data *rxq = rx_queue;
+	struct rte_eth_dev *dev;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	dev = &rte_eth_devices[rxq->port_id];
 
 	if (dev->rx_pkt_burst == NULL ||
 	    dev->rx_pkt_burst == removed_rx_burst) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	rxq = (*priv->rxqs)[rx_queue_id];
-	if (!rxq) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
+
 	return rx_queue_count(rxq);
 }
 
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..5e4ac7324d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -260,7 +260,7 @@ uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
 			  uint16_t pkts_n);
 int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
-uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint32_t mlx5_rx_queue_count(void *rx_queue);
 void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		       struct rte_eth_rxq_info *qinfo);
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..30aac371c8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1018,9 +1018,9 @@ hn_dev_rx_queue_release(void *arg)
  * For this device that means how many packets are pending in the ring.
  */
 uint32_t
-hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id)
+hn_dev_rx_queue_count(void *rx_queue)
 {
-	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_id];
+	struct hn_rx_queue *rxq = rx_queue;
 
 	return rte_ring_count(rxq->rx_ring);
 }
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 43642408bc..2a2bac9338 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -215,7 +215,7 @@ int	hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 void	hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_id,
 			     struct rte_eth_rxq_info *qinfo);
 void	hn_dev_rx_queue_release(void *arg);
-uint32_t hn_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_id);
+uint32_t hn_dev_rx_queue_count(void *rx_queue);
 int	hn_dev_rx_queue_status(void *rxq, uint16_t offset);
 void	hn_dev_free_queues(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..4b2ac4cc43 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -97,14 +97,14 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 }
 
 uint32_t
-nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_count(void *rx_queue)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
 	uint32_t idx;
 	uint32_t count;
 
-	rxq = (struct nfp_net_rxq *)dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index b0a8bf81b0..0fd50a6c22 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -275,8 +275,7 @@ struct nfp_net_rxq {
 } __rte_aligned(64);
 
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
-uint32_t nfp_net_rx_queue_count(struct rte_eth_dev *dev,
-				       uint16_t queue_idx);
+uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 void nfp_net_rx_queue_release(void *rxq);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 90bafcea8e..d28fcaa281 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -431,7 +431,7 @@ int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
 int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
 			   struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+uint32_t otx2_nix_rx_queue_count(void *rx_queue);
 int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
 int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5cb3905b64..3a763f691b 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -342,13 +342,13 @@ nix_rx_head_tail_get(struct otx2_eth_dev *dev,
 }
 
 uint32_t
-otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+otx2_nix_rx_queue_count(void *rx_queue)
 {
-	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
-	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_eth_rxq *rxq = rx_queue;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
 	uint32_t head, tail;
 
-	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
 	return (tail - head) % rxq->qlen;
 }
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8debebc96e..c9b01480f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1281,19 +1281,19 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
+sfc_rx_queue_count(void *rx_queue)
 {
-	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
-	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
-	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_dp_rxq *dp_rxq = rx_queue;
+	const struct sfc_dp_rx *dp_rx;
 	struct sfc_rxq_info *rxq_info;
 
-	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dp_rx = sfc_dp_rx_by_dp_rxq(dp_rxq);
+	rxq_info = sfc_rxq_info_by_dp_rxq(dp_rxq);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
 
-	return sap->dp_rx->qdesc_npending(rxq_info->dp);
+	return dp_rx->qdesc_npending(dp_rxq);
 }
 
 /*
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..0e87620e42 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1060,8 +1060,7 @@ nicvf_rx_queue_release_mbufs(struct rte_eth_dev *dev, struct nicvf_rxq *rxq)
 	if (dev->rx_pkt_burst == NULL)
 		return;
 
-	while ((rxq_cnt = nicvf_dev_rx_queue_count(dev,
-				nicvf_netdev_qidx(rxq->nic, rxq->queue_id)))) {
+	while ((rxq_cnt = nicvf_dev_rx_queue_count(rxq))) {
 		nb_pkts = dev->rx_pkt_burst(rxq, rx_pkts,
 					NICVF_MAX_RX_FREE_THRESH);
 		PMD_DRV_LOG(INFO, "nb_pkts=%d  rxq_cnt=%d", nb_pkts, rxq_cnt);
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..0d4f4ae87e 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -649,11 +649,11 @@ nicvf_recv_pkts_multiseg_cksum_vlan_strip(void *rx_queue,
 }
 
 uint32_t
-nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
+nicvf_dev_rx_queue_count(void *rx_queue)
 {
 	struct nicvf_rxq *rxq;
 
-	rxq = dev->data->rx_queues[queue_idx];
+	rxq = rx_queue;
 	return nicvf_addr_read(rxq->cq_status) & NICVF_CQ_CQE_COUNT_MASK;
 }
 
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..271f329dc4 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -83,7 +83,7 @@ nicvf_mbuff_init_mseg_update(struct rte_mbuf *pkt, const uint64_t mbuf_init,
 	*(uint64_t *)(&pkt->rearm_data) = init.value;
 }
 
-uint32_t nicvf_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx);
+uint32_t nicvf_dev_rx_queue_count(void *rx_queue);
 uint32_t nicvf_dev_rbdr_refill(struct rte_eth_dev *dev, uint16_t queue_idx);
 
 uint16_t nicvf_recv_pkts_no_offload(void *rxq, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965..569cd6a48f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -446,8 +446,7 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-		uint16_t rx_queue_id);
+uint32_t txgbe_dev_rx_queue_count(void *rx_queue);
 
 int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..2a7cfdeedb 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2688,14 +2688,14 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 uint32_t
-txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+txgbe_dev_rx_queue_count(void *rx_queue)
 {
 #define TXGBE_RXQ_SCAN_INTERVAL 4
 	volatile struct txgbe_rx_desc *rxdp;
 	struct txgbe_rx_queue *rxq;
 	uint32_t desc = 0;
 
-	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq = rx_queue;
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 
 	while ((desc < rxq->nb_rx_desc) &&
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..f2b3f142d8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -1369,11 +1369,11 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 static uint32_t
-eth_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+eth_rx_queue_count(void *rx_queue)
 {
 	struct vhost_queue *vq;
 
-	vq = dev->data->rx_queues[rx_queue_id];
+	vq = rx_queue;
 	if (vq == NULL)
 		return 0;
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 0bff526819..cdd16d6e57 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5060,7 +5060,7 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 	    dev->data->rx_queues[queue_id] == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev, queue_id);
+	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
 }
 
 /**@{@name Rx hardware descriptor states
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 2296872888..51cd68de94 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -41,8 +41,7 @@ typedef uint16_t (*eth_tx_prep_t)(void *txq,
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
 
-typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
-					 uint16_t rx_queue_id);
+typedef uint32_t (*eth_rx_queue_count_t)(void *rxq);
 /**< @internal Get number of used descriptors on a receive queue. */
 
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
-- 
2.26.3


^ permalink raw reply	[relevance 6%]

* [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  2021-10-07 11:27  6%         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-07 11:27  2%         ` Konstantin Ananyev
  2021-10-09 12:05  0%           ` fengchengwen
  2021-10-11  8:25  0%           ` Andrew Rybchenko
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
                           ` (3 subsequent siblings)
  5 siblings, 2 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Copy the public function pointers (rx_pkt_burst(), etc.) and related
pointers to internal data from the rte_eth_dev structure into a
separate flat array. That array will remain in a public header.
The intention here is to make rte_eth_dev and related structures internal.
That should allow future changes to the core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.
The plan is to keep a minimal part of the data from rte_eth_dev public,
so we can still use inline functions for fast-path calls
(like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
The whole idea behind this new scheme:
1. PMDs keep setting up fast-path function pointers and related data
   inside the rte_eth_dev struct in the same way they did before.
2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
   (for secondary processes) we call eth_dev_fp_ops_setup(), which
   copies these function and data pointers into rte_eth_fp_ops[port_id].
3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
   we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
   to some dummy values.
4. Fast-path ethdev API (rte_eth_rx_burst(), etc.) will use this new
   flat array to call PMD-specific functions.
That approach should allow us to make rte_eth_devices[] private
without introducing regressions and helps to avoid changes in driver code.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  7 +++++
 lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
 4 files changed, 141 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..3eeda6e9f9 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused void *rxq,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused void *txq,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_eth_fp_ops dummy_ops = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.rxq = {.data = dummy_data, .clbk = dummy_data,},
+		.txq = {.data = dummy_data, .clbk = dummy_data,},
+	};
+
+	*fpo = dummy_ops;
+}
+
+void
+eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev)
+{
+	fpo->rx_pkt_burst = dev->rx_pkt_burst;
+	fpo->tx_pkt_burst = dev->tx_pkt_burst;
+	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
+	fpo->rx_queue_count = dev->rx_queue_count;
+	fpo->rx_descriptor_status = dev->rx_descriptor_status;
+	fpo->tx_descriptor_status = dev->tx_descriptor_status;
+
+	fpo->rxq.data = dev->data->rx_queues;
+	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
+
+	fpo->txq.data = dev->data->tx_queues;
+	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 3724429577..5721be7bdc 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth fast-path API to dummy values */
+void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
+
+/* setup eth fast-path API to ethdev values */
+void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
+		const struct rte_eth_dev *dev);
+
 #endif /* _ETH_PRIVATE_H_ */
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c8abda6dd7..9f7a0cbb8c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public fast-path API */
+struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_callback_process(eth_dev,
 				RTE_ETH_EVENT_DESTROY, NULL);
 
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
+
 	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
 
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
@@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
 		(*dev->dev_ops->link_update)(dev, 0);
 	}
 
+	/* expose selection of PMD fast-path functions */
+	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
+
 	rte_ethdev_trace_start(port_id);
 	return 0;
 }
@@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
 		return 0;
 	}
 
+	/* point fast-path functions to dummy ones */
+	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
+
 	dev->data->dev_started = 0;
 	ret = (*dev->dev_ops->dev_stop)(dev);
 	rte_ethdev_trace_stop(port_id, ret);
@@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
 	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
 }
 
+RTE_INIT(eth_dev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
+		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
+}
+
 RTE_INIT(eth_dev_init_cb_lists)
 {
 	uint16_t i;
@@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return;
 
+	/*
+	 * For secondary processes, at this point we expect the device
+	 * to be already 'usable', so shared data and all function pointers
+	 * for fast-path devops have to be set up properly inside rte_eth_dev.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
 
 	dev->state = RTE_ETH_DEV_ATTACHED;
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 51cd68de94..d5853dff86 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal ethdev Rx/Tx
+ * queues data.
+ * The main purpose of exposing these pointers at all is to allow the
+ * compiler to fetch this data for fast-path ethdev inline functions in advance.
+ */
+struct rte_ethdev_qdata {
+	void **data;
+	/**< points to array of internal queue data pointers */
+	void **clbk;
+	/**< points to array of queue callback data pointers */
+};
+
+/**
+ * @internal
+ * fast-path ethdev functions and related data are held in a flat array.
+ * One entry per ethdev.
+ * On 64-bit systems contents of this structure occupy exactly two 64B lines.
+ * On 32-bit systems contents of this structure fits into one 64B line.
+ */
+struct rte_eth_fp_ops {
+
+	/**
+	 * Rx fast-path functions and related data.
+	 * 64-bit systems: occupies first 64B line
+	 */
+	eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	struct rte_ethdev_qdata rxq;
+	/**< Rx queues data. */
+	uintptr_t reserved1[3];
+
+	/**
+	 * Tx fast-path functions and related data.
+	 * 64-bit systems: occupies second 64B line
+	 */
+	eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	struct rte_ethdev_qdata txq;
+	/**< Tx queues data. */
+	uintptr_t reserved2[3];
+
+} __rte_cache_aligned;
+
+extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
+
 
 /**
  * @internal
-- 
2.26.3



* [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
  2021-10-07 11:27  6%         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-07 11:27  2%         ` Konstantin Ananyev
  2021-10-11  9:02  0%           ` Andrew Rybchenko
  2021-10-07 11:27  9%         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
                           ` (2 subsequent siblings)
  5 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While this is an API/ABI breakage, the change is intended to be
transparent for both users (no changes in user applications are
required) and PMD developers (no changes in PMDs are required).
One extra thing to note: with these changes, RX/TX callback invocation
incurs an extra function call. That might cause an insignificant
slowdown on code paths where RX/TX callbacks are heavily involved.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c |  31 +++++
 lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
 lib/ethdev/version.map      |   3 +
 3 files changed, 208 insertions(+), 68 deletions(-)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 3eeda6e9f9..1222c6f84e 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
 	fpo->txq.data = dev->data->tx_queues;
 	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
 }
+
+uint16_t
+rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+	void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
+				nb_pkts, cb->param);
+		cb = cb->next;
+	}
+
+	return nb_rx;
+}
+
+uint16_t
+rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
+{
+	const struct rte_eth_rxtx_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
+				cb->param);
+		cb = cb->next;
+	}
+
+	return nb_pkts;
+}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index cdd16d6e57..c0e1a40681 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
 
 #include <rte_ethdev_core.h>
 
+/**
+ * @internal
+ * Helper routine for eth driver rx_burst API.
+ * Should be called on exit from the PMD's rte_eth_rx_burst implementation.
+ * Does necessary post-processing - invokes RX callbacks if any, etc.
+ *
+ * @param port_id
+ *  The port identifier of the Ethernet device.
+ * @param queue_id
+ *  The index of the receive queue from which to retrieve input packets.
+ * @param rx_pkts
+ *   The address of an array of pointers to *rte_mbuf* structures that
+ *   have been retrieved from the device.
+ * @param nb_rx
+ *   The number of packets that were retrieved from the device.
+ * @param nb_pkts
+ *   The number of elements in *rx_pkts* array.
+ * @param opaque
+ *   Opaque pointer of RX queue callback related data.
+ *
+ * @return
+ *  The number of packets effectively supplied to the *rx_pkts* array.
+ */
+uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
+		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
+		void *opaque);
+
 /**
  *
  * Retrieve a burst of input packets from a receive queue of an Ethernet
@@ -4995,23 +5022,37 @@ static inline uint16_t
 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	uint16_t nb_rx;
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_rx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
-	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
-				     rx_pkts, nb_pkts);
+
+	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
-						nb_pkts, cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
+				nb_rx, nb_pkts, cb);
 #endif
 
 	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
@@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
 static inline int
 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
+
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	dev = &rte_eth_devices[port_id];
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
-	if (queue_id >= dev->data->nb_rx_queues ||
-	    dev->data->rx_queues[queue_id] == NULL)
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
+	if (qd == NULL)
 		return -EINVAL;
 
-	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
+	return (int)(*p->rx_queue_count)(qd);
 }
 
 /**@{@name Rx hardware descriptor states
@@ -5108,21 +5154,30 @@ static inline int
 rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 	uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *rxq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_RX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->rxq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_RX
-	if (queue_id >= dev->data->nb_rx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
-	rxq = dev->data->rx_queues[queue_id];
-
-	return (*dev->rx_descriptor_status)(rxq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
+	return (*p->rx_descriptor_status)(qd, offset);
 }
 
 /**@{@name Tx hardware descriptor states
@@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
 static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
 	uint16_t queue_id, uint16_t offset)
 {
-	struct rte_eth_dev *dev;
-	void *txq;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return -EINVAL;
+	}
 #endif
-	dev = &rte_eth_devices[port_id];
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
+
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues)
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	if (qd == NULL)
 		return -ENODEV;
 #endif
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
-	txq = dev->data->tx_queues[queue_id];
-
-	return (*dev->tx_descriptor_status)(txq, offset);
+	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
+	return (*p->tx_descriptor_status)(qd, offset);
 }
 
+/**
+ * @internal
+ * Helper routine for eth driver tx_burst API.
+ * Should be called before entering the PMD's rte_eth_tx_burst implementation.
+ * Does necessary pre-processing - invokes TX callbacks if any, etc.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ * @param tx_pkts
+ *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
+ *   which contain the output packets.
+ * @param nb_pkts
+ *   The maximum number of packets to transmit.
+ * @return
+ *   The number of output packets to transmit.
+ */
+uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
+	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
+
 /**
  * Send a burst of output packets on a transmit queue of an Ethernet device.
  *
@@ -5256,20 +5342,34 @@ static inline uint16_t
 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_eth_fp_ops *p;
+	void *cb, *qd;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
+		return 0;
+	}
+#endif
+
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
-	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
 
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		return 0;
 	}
 #endif
 
 #ifdef RTE_ETHDEV_RXTX_CALLBACKS
-	struct rte_eth_rxtx_callback *cb;
 
 	/* __ATOMIC_RELEASE memory order was used when the
 	 * call back was inserted into the list.
@@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
 	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 	 * not required.
 	 */
-	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
-				__ATOMIC_RELAXED);
-
-	if (unlikely(cb != NULL)) {
-		do {
-			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
-					cb->param);
-			cb = cb->next;
-		} while (cb != NULL);
-	}
+	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
+	if (unlikely(cb != NULL))
+		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
+				nb_pkts, cb);
 #endif
 
-	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
-		nb_pkts);
-	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
+	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+
+	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
+	return nb_pkts;
 }
 
 /**
@@ -5354,31 +5449,42 @@ static inline uint16_t
 rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
 		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	struct rte_eth_dev *dev;
+	struct rte_eth_fp_ops *p;
+	void *qd;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (!rte_eth_dev_is_valid_port(port_id)) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+	if (port_id >= RTE_MAX_ETHPORTS ||
+			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid port_id=%u or queue_id=%u\n",
+			port_id, queue_id);
 		rte_errno = ENODEV;
 		return 0;
 	}
 #endif
 
-	dev = &rte_eth_devices[port_id];
+	/* fetch pointer to queue data */
+	p = &rte_eth_fp_ops[port_id];
+	qd = p->txq.data[queue_id];
 
 #ifdef RTE_ETHDEV_DEBUG_TX
-	if (queue_id >= dev->data->nb_tx_queues) {
-		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
+		rte_errno = ENODEV;
+		return 0;
+	}
+	if (qd == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",
+			queue_id, port_id);
 		rte_errno = EINVAL;
 		return 0;
 	}
 #endif
 
-	if (!dev->tx_pkt_prepare)
+	if (!p->tx_pkt_prepare)
 		return nb_pkts;
 
-	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
-			tx_pkts, nb_pkts);
+	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
 }
 
 #else
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 904bce6ea1..79e62dcf61 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -7,6 +7,8 @@ DPDK_22 {
 	rte_eth_allmulticast_disable;
 	rte_eth_allmulticast_enable;
 	rte_eth_allmulticast_get;
+	rte_eth_call_rx_callbacks;
+	rte_eth_call_tx_callbacks;
 	rte_eth_dev_adjust_nb_rx_tx_desc;
 	rte_eth_dev_callback_register;
 	rte_eth_dev_callback_unregister;
@@ -76,6 +78,7 @@ DPDK_22 {
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
 	rte_eth_find_next_sibling;
+	rte_eth_fp_ops;
 	rte_eth_iterator_cleanup;
 	rte_eth_iterator_init;
 	rte_eth_iterator_next;
-- 
2.26.3



* [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (2 preceding siblings ...)
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-07 11:27  9%         ` Konstantin Ananyev
  2021-10-08 18:13  0%         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
  2021-10-11  9:22  0%         ` Andrew Rybchenko
  5 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-10-07 11:27 UTC (permalink / raw)
  To: dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan,
	Konstantin Ananyev

Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
data into a private header (ethdev_driver.h).
A few minor changes keep DPDK building after the move.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst        |   6 +
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/cxgbe/base/adapter.h              |   2 +-
 drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
 drivers/net/netvsc/hn_var.h                   |   1 +
 lib/ethdev/ethdev_driver.h                    | 148 ++++++++++++++++++
 lib/ethdev/rte_ethdev_core.h                  | 143 -----------------
 lib/ethdev/version.map                        |   2 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 lib/metrics/rte_metrics_telemetry.c           |   2 +-
 13 files changed, 164 insertions(+), 152 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 0874108b1d..4c9751bb1d 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -237,6 +237,12 @@ ABI Changes
   to user, it still counts as an ABI change, as ``eth_rx_queue_count_t``
   is used by  public inline function ``rte_eth_rx_queue_count``.
 
+* ethdev: Made ``rte_eth_dev``, ``rte_eth_dev_data``, ``rte_eth_rxtx_callback``
+  private data structures. ``rte_eth_devices[]`` can't be accessed directly
+  by users any more. While this is an ABI breakage, the change is intended
+  to be transparent for both users (no changes in user applications are
+  required) and PMD developers (no changes in PMDs are required).
+
 
 Known Issues
 ------------
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
index 6e9643c383..b561b67174 100644
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ b/drivers/common/octeontx2/otx2_sec_idev.c
@@ -4,7 +4,7 @@
 
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_spinlock.h>
 
 #include "otx2_common.h"
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 37fad11d91..f0b72e05c2 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -6,7 +6,7 @@
 
 #include <cryptodev_pmd.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_event_crypto_adapter.h>
 
 #include "otx2_cryptodev.h"
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 01a2a9d147..1c7c8afe16 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -12,7 +12,7 @@
 #include <rte_mbuf.h>
 #include <rte_io.h>
 #include <rte_rwlock.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "../cxgbe_compat.h"
 #include "../cxgbe_ofld.h"
diff --git a/drivers/net/dpaa2/dpaa2_ptp.c b/drivers/net/dpaa2/dpaa2_ptp.c
index 899dd5d442..8d79e39244 100644
--- a/drivers/net/dpaa2/dpaa2_ptp.c
+++ b/drivers/net/dpaa2/dpaa2_ptp.c
@@ -10,7 +10,7 @@
 #include <unistd.h>
 #include <stdarg.h>
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_eth_ctrl.h>
 #include <rte_malloc.h>
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 2a2bac9338..74e6e6010d 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -7,6 +7,7 @@
  */
 
 #include <rte_eal_paging.h>
+#include <ethdev_driver.h>
 
 /*
  * Tunable ethdev params
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index cc2c75261c..a743553d81 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -17,6 +17,154 @@
 
 #include <rte_ethdev.h>
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * queue on RX and TX.
+ */
+struct rte_eth_rxtx_callback {
+	struct rte_eth_rxtx_callback *next;
+	union{
+		rte_rx_callback_fn rx;
+		rte_tx_callback_fn tx;
+	} fn;
+	void *param;
+};
+
+/**
+ * @internal
+ * The generic data structure associated with each ethernet device.
+ *
+ * Pointers to burst-oriented packet receive and transmit functions are
+ * located at the beginning of the structure, along with the pointer to
+ * where all the data elements for the particular device are stored in shared
+ * memory. This split allows the function pointer and driver data to be per-
+ * process, while the actual configuration data for the device is shared.
+ */
+struct rte_eth_dev {
+	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
+	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
+	eth_tx_prep_t tx_pkt_prepare;
+	/**< Pointer to PMD transmit prepare function. */
+	eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+
+	/**
+	 * points to device data that is shared between
+	 * primary and secondary processes.
+	 */
+	struct rte_eth_dev_data *data;
+	void *process_private; /**< Pointer to per-process device data. */
+	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	struct rte_device *device; /**< Backing device */
+	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
+	/** User application callbacks for NIC interrupts */
+	struct rte_eth_dev_cb_list link_intr_cbs;
+	/**
+	 * User-supplied functions called from rx_burst to post-process
+	 * received packets before passing them to the user
+	 */
+	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	/**
+	 * User-supplied functions called from tx_burst to pre-process
+	 * received packets before passing them to the driver for transmission.
+	 */
+	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+	enum rte_eth_dev_state state; /**< Flag indicating the port state */
+	void *security_ctx; /**< Context for security ops */
+
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+struct rte_eth_dev_sriov;
+struct rte_eth_dev_owner;
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each ethernet
+ * device. This structure is safe to place in shared memory to be common
+ * among different processes in a multi-process configuration.
+ */
+struct rte_eth_dev_data {
+	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
+
+	void **rx_queues; /**< Array of pointers to RX queues. */
+	void **tx_queues; /**< Array of pointers to TX queues. */
+	uint16_t nb_rx_queues; /**< Number of RX queues. */
+	uint16_t nb_tx_queues; /**< Number of TX queues. */
+
+	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
+
+	void *dev_private;
+			/**< PMD-specific private data.
+			 *   @see rte_eth_dev_release_port()
+			 */
+
+	struct rte_eth_link dev_link;   /**< Link-level information & status. */
+	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
+	uint16_t mtu;                   /**< Maximum Transmission Unit. */
+	uint32_t min_rx_buf_size;
+			/**< Common RX buffer size handled by all queues. */
+
+	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
+	struct rte_ether_addr *mac_addrs;
+			/**< Device Ethernet link address.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+			/**< Bitmap associating MAC addresses to pools. */
+	struct rte_ether_addr *hash_mac_addrs;
+			/**< Device Ethernet MAC addresses of hash filtering.
+			 *   @see rte_eth_dev_release_port()
+			 */
+	uint16_t port_id;           /**< Device [external] port identifier. */
+
+	__extension__
+	uint8_t promiscuous   : 1,
+		/**< RX promiscuous mode ON(1) / OFF(0). */
+		scattered_rx : 1,
+		/**< RX of scattered packets is ON(1) / OFF(0) */
+		all_multicast : 1,
+		/**< RX all multicast mode ON(1) / OFF(0). */
+		dev_started : 1,
+		/**< Device state: STARTED(1) / STOPPED(0). */
+		lro         : 1,
+		/**< RX LRO is ON(1) / OFF(0) */
+		dev_configured : 1;
+		/**< Indicates whether the device is configured.
+		 *   CONFIGURED(1) / NOT CONFIGURED(0).
+		 */
+	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
+		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
+	uint32_t dev_flags;             /**< Capabilities. */
+	int numa_node;                  /**< NUMA node connection. */
+	struct rte_vlan_filter_conf vlan_filter_conf;
+			/**< VLAN filter configuration. */
+	struct rte_eth_dev_owner owner; /**< The port owner. */
+	uint16_t representor_id;
+			/**< Switch-specific identifier.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
+
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
+	uint64_t reserved_64s[4]; /**< Reserved for future fields */
+	void *reserved_ptrs[4];   /**< Reserved for future fields */
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * The pool of *rte_eth_dev* structures. The size of the pool
+ * is configured at compile-time in the <rte_ethdev.c> file.
+ */
+extern struct rte_eth_dev rte_eth_devices[];
+
 /**< @internal Declaration of the hairpin peer queue information structure. */
 struct rte_hairpin_peer_info;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d5853dff86..d0017bbe05 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -105,147 +105,4 @@ struct rte_eth_fp_ops {
 
 extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
 
-
-/**
- * @internal
- * Structure used to hold information about the callbacks to be called for a
- * queue on RX and TX.
- */
-struct rte_eth_rxtx_callback {
-	struct rte_eth_rxtx_callback *next;
-	union{
-		rte_rx_callback_fn rx;
-		rte_tx_callback_fn tx;
-	} fn;
-	void *param;
-};
-
-/**
- * @internal
- * The generic data structure associated with each ethernet device.
- *
- * Pointers to burst-oriented packet receive and transmit functions are
- * located at the beginning of the structure, along with the pointer to
- * where all the data elements for the particular device are stored in shared
- * memory. This split allows the function pointer and driver data to be per-
- * process, while the actual configuration data for the device is shared.
- */
-struct rte_eth_dev {
-	eth_rx_burst_t rx_pkt_burst; /**< Pointer to PMD receive function. */
-	eth_tx_burst_t tx_pkt_burst; /**< Pointer to PMD transmit function. */
-	eth_tx_prep_t tx_pkt_prepare; /**< Pointer to PMD transmit prepare function. */
-
-	eth_rx_queue_count_t       rx_queue_count; /**< Get the number of used RX descriptors. */
-	eth_rx_descriptor_status_t rx_descriptor_status; /**< Check the status of a Rx descriptor. */
-	eth_tx_descriptor_status_t tx_descriptor_status; /**< Check the status of a Tx descriptor. */
-
-	/**
-	 * Next two fields are per-device data but *data is shared between
-	 * primary and secondary processes and *process_private is per-process
-	 * private. The second one is managed by PMDs if necessary.
-	 */
-	struct rte_eth_dev_data *data;  /**< Pointer to device data. */
-	void *process_private; /**< Pointer to per-process device data. */
-	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
-	struct rte_device *device; /**< Backing device */
-	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
-	/** User application callbacks for NIC interrupts */
-	struct rte_eth_dev_cb_list link_intr_cbs;
-	/**
-	 * User-supplied functions called from rx_burst to post-process
-	 * received packets before passing them to the user
-	 */
-	struct rte_eth_rxtx_callback *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	/**
-	 * User-supplied functions called from tx_burst to pre-process
-	 * received packets before passing them to the driver for transmission.
-	 */
-	struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
-	enum rte_eth_dev_state state; /**< Flag indicating the port state */
-	void *security_ctx; /**< Context for security ops */
-
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-struct rte_eth_dev_sriov;
-struct rte_eth_dev_owner;
-
-/**
- * @internal
- * The data part, with no function pointers, associated with each ethernet device.
- *
- * This structure is safe to place in shared memory to be common among different
- * processes in a multi-process configuration.
- */
-struct rte_eth_dev_data {
-	char name[RTE_ETH_NAME_MAX_LEN]; /**< Unique identifier name */
-
-	void **rx_queues; /**< Array of pointers to RX queues. */
-	void **tx_queues; /**< Array of pointers to TX queues. */
-	uint16_t nb_rx_queues; /**< Number of RX queues. */
-	uint16_t nb_tx_queues; /**< Number of TX queues. */
-
-	struct rte_eth_dev_sriov sriov;    /**< SRIOV data */
-
-	void *dev_private;
-			/**< PMD-specific private data.
-			 *   @see rte_eth_dev_release_port()
-			 */
-
-	struct rte_eth_link dev_link;   /**< Link-level information & status. */
-	struct rte_eth_conf dev_conf;   /**< Configuration applied to device. */
-	uint16_t mtu;                   /**< Maximum Transmission Unit. */
-	uint32_t min_rx_buf_size;
-			/**< Common RX buffer size handled by all queues. */
-
-	uint64_t rx_mbuf_alloc_failed; /**< RX ring mbuf allocation failures. */
-	struct rte_ether_addr *mac_addrs;
-			/**< Device Ethernet link address.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
-			/**< Bitmap associating MAC addresses to pools. */
-	struct rte_ether_addr *hash_mac_addrs;
-			/**< Device Ethernet MAC addresses of hash filtering.
-			 *   @see rte_eth_dev_release_port()
-			 */
-	uint16_t port_id;           /**< Device [external] port identifier. */
-
-	__extension__
-	uint8_t promiscuous   : 1, /**< RX promiscuous mode ON(1) / OFF(0). */
-		scattered_rx : 1,  /**< RX of scattered packets is ON(1) / OFF(0) */
-		all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
-		dev_started : 1,   /**< Device state: STARTED(1) / STOPPED(0). */
-		lro         : 1,   /**< RX LRO is ON(1) / OFF(0) */
-		dev_configured : 1;
-		/**< Indicates whether the device is configured.
-		 *   CONFIGURED(1) / NOT CONFIGURED(0).
-		 */
-	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint8_t tx_queue_state[RTE_MAX_QUEUES_PER_PORT];
-		/**< Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0). */
-	uint32_t dev_flags;             /**< Capabilities. */
-	int numa_node;                  /**< NUMA node connection. */
-	struct rte_vlan_filter_conf vlan_filter_conf;
-			/**< VLAN filter configuration. */
-	struct rte_eth_dev_owner owner; /**< The port owner. */
-	uint16_t representor_id;
-			/**< Switch-specific identifier.
-			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
-			 */
-
-	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
-	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-	void *reserved_ptrs[4];   /**< Reserved for future fields */
-} __rte_cache_aligned;
-
-/**
- * @internal
- * The pool of *rte_eth_dev* structures. The size of the pool
- * is configured at compile-time in the <rte_ethdev.c> file.
- */
-extern struct rte_eth_dev rte_eth_devices[];
-
 #endif /* _RTE_ETHDEV_CORE_H_ */
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 2bad712958..cfe58e519d 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -73,7 +73,6 @@ DPDK_22 {
 	rte_eth_dev_udp_tunnel_port_add;
 	rte_eth_dev_udp_tunnel_port_delete;
 	rte_eth_dev_vlan_filter;
-	rte_eth_devices;
 	rte_eth_find_next;
 	rte_eth_find_next_of;
 	rte_eth_find_next_owned_by;
@@ -270,6 +269,7 @@ INTERNAL {
 	rte_eth_dev_release_port;
 	rte_eth_dev_internal_reset;
 	rte_eth_devargs_parse;
+	rte_eth_devices;
 	rte_eth_dma_zone_free;
 	rte_eth_dma_zone_reserve;
 	rte_eth_hairpin_queue_peer_bind;
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..89c4ca5d40 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -11,7 +11,7 @@
 #include <rte_common.h>
 #include <rte_dev.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
 #include <rte_service_component.h>
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..1c06c8707c 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -3,7 +3,7 @@
  */
 #include <rte_spinlock.h>
 #include <rte_service_component.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 
 #include "eventdev_pmd.h"
 #include "rte_eventdev_trace.h"
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..ebef5f0906 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -29,7 +29,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_cryptodev.h>
 #include <cryptodev_pmd.h>
 #include <rte_telemetry.h>
diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c
index 269f8ef613..5be21b2e86 100644
--- a/lib/metrics/rte_metrics_telemetry.c
+++ b/lib/metrics/rte_metrics_telemetry.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2020 Intel Corporation
  */
 
-#include <rte_ethdev.h>
+#include <ethdev_driver.h>
 #include <rte_string_fns.h>
 #ifdef RTE_LIB_TELEMETRY
 #include <telemetry_internal.h>
-- 
2.26.3


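The version.map change above moves rte_eth_devices from the public DPDK_22 section to INTERNAL, so applications must reach device state through accessor APIs rather than the raw array. As a rough sketch of that general pattern in plain C (all names below are illustrative, not DPDK's):

```c
#include <stddef.h>
#include <stdint.h>

/* Internal device layout: free to change between releases because
 * applications never see it directly. */
struct dev {
	uint16_t port_id;
	uint32_t mtu;
};

#define DEV_TABLE_SIZE 4

/* Previously exported; now internal to the library. */
static struct dev dev_table[DEV_TABLE_SIZE];

/* Public, ABI-stable accessors replace direct array access. */
int dev_set_mtu(uint16_t port_id, uint32_t mtu)
{
	if (port_id >= DEV_TABLE_SIZE)
		return -1;
	dev_table[port_id].mtu = mtu;
	return 0;
}

int dev_get_mtu(uint16_t port_id, uint32_t *mtu)
{
	if (port_id >= DEV_TABLE_SIZE || mtu == NULL)
		return -1;
	*mtu = dev_table[port_id].mtu;
	return 0;
}
```

With this split the internal struct layout can keep evolving without breaking the application ABI, since applications only link against the accessors.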

* Re: [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-06 20:58  4% ` Nicolas Chautru
@ 2021-10-07 12:01  0%   ` Tom Rix
  2021-10-07 15:19  0%     ` Chautru, Nicolas
  2021-10-07 13:13  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
  1 sibling, 1 reply; 200+ results
From: Tom Rix @ 2021-10-07 12:01 UTC (permalink / raw)
  To: Nicolas Chautru, dev, gakhil, nipun.gupta
  Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal, david.marchand


On 10/6/21 1:58 PM, Nicolas Chautru wrote:
> Adding device information to capture explicitly the assumption
> of the input/output data byte endianness being processed.
>
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
>   doc/guides/rel_notes/release_21_11.rst             | 1 +
>   drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
>   drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
>   drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +

Missed bbdev_null.c

If this was intentional, data_endianness is uninitialized or implicitly
big endian.

It would be better to say it is unknown, which may mean another enum is
needed.

>   drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
>   lib/bbdev/rte_bbdev.h                              | 8 ++++++++
>   6 files changed, 13 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index a8900a3..f0b3006 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,7 @@ API Changes
>   
>   * bbdev: Added capability related to more comprehensive CRC options.
>   
> +* bbdev: Added device info related to data byte endianness processing assumption.
>   
>   ABI Changes
>   -----------
> diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
> index 4e2feef..eb2c6c1 100644
> --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> @@ -1089,6 +1089,7 @@
>   #else
>   	dev_info->harq_buffer_size = 0;
>   #endif
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>   	acc100_check_ir(d);
>   }
>   
> diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> index 6485cc8..c7f15c0 100644
> --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> @@ -372,6 +372,7 @@
>   	dev_info->default_queue_conf = default_queue_conf;
>   	dev_info->capabilities = bbdev_capabilities;
>   	dev_info->cpu_flag_reqs = NULL;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>   
>   	/* Calculates number of queues assigned to device */
>   	dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> index 350c424..72e213e 100644
> --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
>   	dev_info->default_queue_conf = default_queue_conf;
>   	dev_info->capabilities = bbdev_capabilities;
>   	dev_info->cpu_flag_reqs = NULL;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>   
>   	/* Calculates number of queues assigned to device */
>   	dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> index e1db2bf..0cab91a 100644
> --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> @@ -253,6 +253,7 @@ struct turbo_sw_queue {
>   	dev_info->capabilities = bbdev_capabilities;
>   	dev_info->min_alignment = 64;
>   	dev_info->harq_buffer_size = 0;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>   
>   	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
>   }
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index 3ebf62e..b3f3000 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -49,6 +49,12 @@ enum rte_bbdev_state {
>   	RTE_BBDEV_INITIALIZED
>   };
>   
> +/** Definitions of device data byte endianness types */
> +enum rte_bbdev_endianness {
> +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
> +};

Could RTE_BIG|LITTLE_ENDIAN be reused ?

Tom

> +
>   /**
>    * Get the total number of devices that have been successfully initialised.
>    *
> @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
>   	uint16_t min_alignment;
>   	/** HARQ memory available in kB */
>   	uint32_t harq_buffer_size;
> +	/** Byte endianness assumption for input/output data */
> +	enum rte_bbdev_endianness data_endianness;
>   	/** Default queue configuration used if none is supplied  */
>   	struct rte_bbdev_queue_conf default_queue_conf;
>   	/** Device operation capabilities */



* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-06 20:58  4% ` Nicolas Chautru
  2021-10-07 12:01  0%   ` Tom Rix
@ 2021-10-07 13:13  0%   ` Akhil Goyal
  2021-10-07 15:41  0%     ` Chautru, Nicolas
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-07 13:13 UTC (permalink / raw)
  To: Nicolas Chautru, dev, nipun.gupta, trix
  Cc: thomas, mingshan.zhang, arun.joshi, hemant.agrawal, david.marchand

> Subject: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
> 
Title is too long.
bbdev: add dev info for data endianness

> Adding device information to capture explicitly the assumption
> of the input/output data byte endianness being processed.
> 
> Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst             | 1 +
>  drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
>  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
>  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
>  drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
>  lib/bbdev/rte_bbdev.h                              | 8 ++++++++
>  6 files changed, 13 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index a8900a3..f0b3006 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,7 @@ API Changes
> 
>  * bbdev: Added capability related to more comprehensive CRC options.
> 
> +* bbdev: Added device info related to data byte endianness processing
> assumption.

It is not clear from the description or the release notes what the application
is supposed to do based on the new dev_info field, and how the driver determines
what value to set.
Isn't there a standard from the application standpoint that the input/output data
should be in BE or in LE, like in the case of IP packets which are always in BE?
I mean, why is it dependent on the PMD which is processing it?
Whatever the application understands, the PMD should comply with that and do internal
swapping if it does not support it.
Am I missing something?

> 
>  ABI Changes
>  -----------
> diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> b/drivers/baseband/acc100/rte_acc100_pmd.c
> index 4e2feef..eb2c6c1 100644
> --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> @@ -1089,6 +1089,7 @@
>  #else
>  	dev_info->harq_buffer_size = 0;
>  #endif
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
>  	acc100_check_ir(d);
>  }
> 
> diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> index 6485cc8..c7f15c0 100644
> --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> @@ -372,6 +372,7 @@
>  	dev_info->default_queue_conf = default_queue_conf;
>  	dev_info->capabilities = bbdev_capabilities;
>  	dev_info->cpu_flag_reqs = NULL;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> 
>  	/* Calculates number of queues assigned to device */
>  	dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> index 350c424..72e213e 100644
> --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
>  	dev_info->default_queue_conf = default_queue_conf;
>  	dev_info->capabilities = bbdev_capabilities;
>  	dev_info->cpu_flag_reqs = NULL;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> 
>  	/* Calculates number of queues assigned to device */
>  	dev_info->max_num_queues = 0;
> diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> index e1db2bf..0cab91a 100644
> --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> @@ -253,6 +253,7 @@ struct turbo_sw_queue {
>  	dev_info->capabilities = bbdev_capabilities;
>  	dev_info->min_alignment = 64;
>  	dev_info->harq_buffer_size = 0;
> +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> 
>  	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> >dev_id);
>  }
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index 3ebf62e..b3f3000 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -49,6 +49,12 @@ enum rte_bbdev_state {
>  	RTE_BBDEV_INITIALIZED
>  };
> 
> +/** Definitions of device data byte endianness types */
> +enum rte_bbdev_endianness {
> +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
> +};
If at all we need this dev_info field,
as Tom suggested, we should use RTE_BIG/LITTLE_ENDIAN.

> +
>  /**
>   * Get the total number of devices that have been successfully initialised.
>   *
> @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
>  	uint16_t min_alignment;
>  	/** HARQ memory available in kB */
>  	uint32_t harq_buffer_size;
> +	/** Byte endianness assumption for input/output data */
> +	enum rte_bbdev_endianness data_endianness;

We should define how the input and output data are expected from the app.
If need be, we can define a simple ``bool swap`` instead of an enum.

>  	/** Default queue configuration used if none is supplied  */
>  	struct rte_bbdev_queue_conf default_queue_conf;
>  	/** Device operation capabilities */
> --
> 1.8.3.1


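The internal swapping mentioned above, whether done by the PMD or by the application once the device's data endianness is known, amounts to reversing the byte order of each data word. A minimal sketch in plain C (the function name and the 32-bit word size are assumptions for illustration; they are not part of bbdev):

```c
#include <stddef.h>
#include <stdint.h>

/* Reverse the byte order of each 32-bit word in a buffer, converting a
 * little-endian word stream to big-endian or back (the operation is its
 * own inverse). Any trailing bytes short of a full word are left as-is. */
static void swap_words_32(uint8_t *buf, size_t len)
{
	size_t i;

	for (i = 0; i + 4 <= len; i += 4) {
		uint8_t t;

		t = buf[i];
		buf[i] = buf[i + 3];
		buf[i + 3] = t;
		t = buf[i + 1];
		buf[i + 1] = buf[i + 2];
		buf[i + 2] = t;
	}
}
```

Doing this per word on every enqueue/dequeue is exactly the per-buffer cost the thread is weighing when deciding whether the PMD or the application should carry it.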

* Re: [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-07 12:01  0%   ` Tom Rix
@ 2021-10-07 15:19  0%     ` Chautru, Nicolas
  0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2021-10-07 15:19 UTC (permalink / raw)
  To: Tom Rix, dev, gakhil, nipun.gupta
  Cc: thomas, Zhang, Mingshan, Joshi, Arun, hemant.agrawal, david.marchand

Hi Tom, 

> -----Original Message-----
> From: Tom Rix <trix@redhat.com>
> Sent: Thursday, October 7, 2021 5:01 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> gakhil@marvell.com; nipun.gupta@nxp.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com
> Subject: Re: [PATCH v9] bbdev: add device info related to data endianness
> assumption
> 
> 
> On 10/6/21 1:58 PM, Nicolas Chautru wrote:
> > Adding device information to capture explicitly the assumption of the
> > input/output data byte endianness being processed.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> >   doc/guides/rel_notes/release_21_11.rst             | 1 +
> >   drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
> >   drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> >   drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
> 
> Missed bbdev_null.c
> 
> If this was intentional data_endianness is uninitialized or implicitly big endian.
> 
> It would be better to say it is unknown. which may mean another enum is
> needed.

I considered this but the null driver doesn't touch data, so it is not relevant.
Still, if preferred, Nipun, feel free to set it in null_driver as well in your series (with a comment that it is not relevant). 

> 
> >   drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
> >   lib/bbdev/rte_bbdev.h                              | 8 ++++++++
> >   6 files changed, 13 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index a8900a3..f0b3006 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -191,6 +191,7 @@ API Changes
> >
> >   * bbdev: Added capability related to more comprehensive CRC options.
> >
> > +* bbdev: Added device info related to data byte endianness processing
> assumption.
> >
> >   ABI Changes
> >   -----------
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index 4e2feef..eb2c6c1 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1089,6 +1089,7 @@
> >   #else
> >   	dev_info->harq_buffer_size = 0;
> >   #endif
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >   	acc100_check_ir(d);
> >   }
> >
> > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > index 6485cc8..c7f15c0 100644
> > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > @@ -372,6 +372,7 @@
> >   	dev_info->default_queue_conf = default_queue_conf;
> >   	dev_info->capabilities = bbdev_capabilities;
> >   	dev_info->cpu_flag_reqs = NULL;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >   	/* Calculates number of queues assigned to device */
> >   	dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > index 350c424..72e213e 100644
> > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> >   	dev_info->default_queue_conf = default_queue_conf;
> >   	dev_info->capabilities = bbdev_capabilities;
> >   	dev_info->cpu_flag_reqs = NULL;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >   	/* Calculates number of queues assigned to device */
> >   	dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > index e1db2bf..0cab91a 100644
> > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> >   	dev_info->capabilities = bbdev_capabilities;
> >   	dev_info->min_alignment = 64;
> >   	dev_info->harq_buffer_size = 0;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >   	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> >dev_id);
> >   }
> > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > 3ebf62e..b3f3000 100644
> > --- a/lib/bbdev/rte_bbdev.h
> > +++ b/lib/bbdev/rte_bbdev.h
> > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> >   	RTE_BBDEV_INITIALIZED
> >   };
> >
> > +/** Definitions of device data byte endianness types */ enum
> > +rte_bbdev_endianness {
> > +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> > +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> 
> Could RTE_BIG|LITTLE_ENDIAN be reused ?

I considered this but the usage is different: these are build-time #defines, and reusing them would really bring confusion here.
Note that these are not really the endianness of the system itself but are specific to the bbdev data output going through signal processing.
I thought it was more explicit and less confusing this way; feel free to comment back. 

Thanks for the comments.

> 
> Tom
> 
> > +
> >   /**
> >    * Get the total number of devices that have been successfully initialised.
> >    *
> > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> >   	uint16_t min_alignment;
> >   	/** HARQ memory available in kB */
> >   	uint32_t harq_buffer_size;
> > +	/** Byte endianness assumption for input/output data */
> > +	enum rte_bbdev_endianness data_endianness;
> >   	/** Default queue configuration used if none is supplied  */
> >   	struct rte_bbdev_queue_conf default_queue_conf;
> >   	/** Device operation capabilities */


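The distinction drawn above is that RTE_BIG_ENDIAN / RTE_LITTLE_ENDIAN are build-time #defines describing the compilation target, while the proposed field is a per-device runtime property. A small self-contained sketch of a runtime byte-order check (the enum and function names are illustrative, not DPDK's):

```c
#include <stdint.h>

enum data_endianness { DATA_BIG_ENDIAN, DATA_LITTLE_ENDIAN };

/* Determine the host byte order at runtime by inspecting which byte of
 * a 16-bit value is stored first in memory. */
static enum data_endianness host_endianness(void)
{
	const uint16_t probe = 0x0102;
	const uint8_t *first_byte = (const uint8_t *)&probe;

	/* On a little-endian host the low-order byte 0x02 comes first. */
	return (*first_byte == 0x02) ? DATA_LITTLE_ENDIAN : DATA_BIG_ENDIAN;
}
```

A build-time macro collapses such a check to a constant for the build target; the dev_info field instead lets each PMD report its own data assumption independently of the host it was built for.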

* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-07 13:13  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-07 15:41  0%     ` Chautru, Nicolas
  2021-10-07 16:49  0%       ` Nipun Gupta
  0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-07 15:41 UTC (permalink / raw)
  To: Akhil Goyal, dev, nipun.gupta, trix
  Cc: thomas, Zhang, Mingshan, Joshi, Arun, hemant.agrawal, david.marchand

Hi Akhil, 


> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Thursday, October 7, 2021 6:14 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> nipun.gupta@nxp.com; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> endianness assumption
> 
> > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> Title is too long.
> bbdev: add dev info for data endianness

OK

> 
> > Adding device information to capture explicitly the assumption of the
> > input/output data byte endianness being processed.
> >
> > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > ---
> >  doc/guides/rel_notes/release_21_11.rst             | 1 +
> >  drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
> >  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
> >  drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
> >  lib/bbdev/rte_bbdev.h                              | 8 ++++++++
> >  6 files changed, 13 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index a8900a3..f0b3006 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -191,6 +191,7 @@ API Changes
> >
> >  * bbdev: Added capability related to more comprehensive CRC options.
> >
> > +* bbdev: Added device info related to data byte endianness processing
> > assumption.
> 
> It is not clear from the description or the release notes, what the application
> is supposed to do based on the new dev_info field set and how the driver
> determine what value to set?
> Isn't there a standard from the application stand point that the input/output
> data Should be in BE or in LE like in case of IP packets which are always in BE?
> I mean why is it dependent on the PMD which is processing it?
> Whatever application understands, PMD should comply with that and do
> internal Swapping if it does not support it.
> Am I missing something?

This is really to allow Nipin to add his own NXP la12xx PMD, which appears to have a different assumption on endianness.
All existing processing is done in LE by default by the existing PMDs and the existing ecosystem. 
I cannot comment on why they would want to do that for the la12xx specifically; I could only speculate, but I am trying to help find the best way for the new PMD to be supported. 
So this suggested change is purely about exposing the different assumptions of the PMDs, so that this new PMD can still be supported under this API even though it is in effect incompatible with the existing ecosystem.
In case the application has a different assumption than what the PMD does, then byte swapping would have to be done in the application; more likely, I assume that la12xx has its own ecosystem with a different endianness required for other reasons. 
The option you are suggesting would be to put the burden on the PMD, but I doubt there is an actual use case for that. I assume they chose a different endianness for some other specific reason, not necessarily to be compatible with the existing ecosystem. 
Niping, Hemant, feel free to comment back; from previous discussion I believe this is what you wanted to do. Unsure of the reason, feel free to share more details or not. 
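
For illustration, a minimal sketch of the application-side handling described above. The enum is copied from the patch under discussion; `swap32_buf()` and `maybe_swap()` are hypothetical helpers, not part of the proposed API:

```c
#include <stddef.h>
#include <stdint.h>

/* Enum as proposed in this patch (lib/bbdev/rte_bbdev.h). */
enum rte_bbdev_endianness {
	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */
};

/* Hypothetical helper: byte-swap a buffer of 32-bit words in place. */
static void
swap32_buf(uint32_t *buf, size_t n_words)
{
	size_t i;

	for (i = 0; i < n_words; i++) {
		uint32_t v = buf[i];

		buf[i] = (v >> 24) | ((v >> 8) & 0x0000ff00) |
			 ((v << 8) & 0x00ff0000) | (v << 24);
	}
}

/*
 * Hypothetical app-side wrapper: the application keeps its data in one
 * convention (here LE) and swaps only when the reported device assumption
 * differs, e.g. before enqueue and after dequeue.
 */
static void
maybe_swap(uint32_t *data, size_t n_words, enum rte_bbdev_endianness dev_end)
{
	if (dev_end != RTE_BBDEV_LITTLE_ENDIAN)
		swap32_buf(data, n_words);
}
```

With `dev_end` read from the new `data_endianness` field of `rte_bbdev_driver_info`, the same application code could then drive either kind of PMD without per-device branches elsewhere.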


> 
> >
> >  ABI Changes
> >  -----------
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index 4e2feef..eb2c6c1 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1089,6 +1089,7 @@
> >  #else
> >  	dev_info->harq_buffer_size = 0;
> >  #endif
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >  	acc100_check_ir(d);
> >  }
> >
> > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > index 6485cc8..c7f15c0 100644
> > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > @@ -372,6 +372,7 @@
> >  	dev_info->default_queue_conf = default_queue_conf;
> >  	dev_info->capabilities = bbdev_capabilities;
> >  	dev_info->cpu_flag_reqs = NULL;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >  	/* Calculates number of queues assigned to device */
> >  	dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > index 350c424..72e213e 100644
> > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> >  	dev_info->default_queue_conf = default_queue_conf;
> >  	dev_info->capabilities = bbdev_capabilities;
> >  	dev_info->cpu_flag_reqs = NULL;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >  	/* Calculates number of queues assigned to device */
> >  	dev_info->max_num_queues = 0;
> > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > index e1db2bf..0cab91a 100644
> > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> >  	dev_info->capabilities = bbdev_capabilities;
> >  	dev_info->min_alignment = 64;
> >  	dev_info->harq_buffer_size = 0;
> > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> >
> >  	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > >dev_id);
> >  }
> > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > 3ebf62e..b3f3000 100644
> > --- a/lib/bbdev/rte_bbdev.h
> > +++ b/lib/bbdev/rte_bbdev.h
> > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> >  	RTE_BBDEV_INITIALIZED
> >  };
> >
> > +/** Definitions of device data byte endianness types */ enum
> > +rte_bbdev_endianness {
> > +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> > +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> If at all be need this dev_info field,
> as Tom suggested we should use RTE_BIG/LITTLE_ENDIAN.

See separate comment on my reply to Tom:
I considered this, but the usage is different: those are build-time #defines, and reusing them would really bring confusion here.
Note that this is not really the endianness of the system itself but is specific to the bbdev data going through signal processing.
I thought it was more explicit and less confusing this way; feel free to comment back.
NXP would know best why a different endianness would be required in the PMD. 

> 
> > +
> >  /**
> >   * Get the total number of devices that have been successfully initialised.
> >   *
> > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> >  	uint16_t min_alignment;
> >  	/** HARQ memory available in kB */
> >  	uint32_t harq_buffer_size;
> > +	/** Byte endianness assumption for input/output data */
> > +	enum rte_bbdev_endianness data_endianness;
> 
> We should define how the input and output data are expected from the app.
> If need be, we can define a simple ``bool swap`` instead of an enum.

This could be done as well: default no swap, with a swap required for the new PMD. 
I will let Nipin/Hemant comment back.
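
To make the two shapes being weighed concrete, a hypothetical side-by-side sketch (struct and field names invented here for comparison only, not the actual bbdev definitions):

```c
#include <stdbool.h>

/* v9 proposal: an explicit enum in the driver info. */
struct drv_info_enum_sketch {
	enum { SKETCH_BE, SKETCH_LE } data_endianness;
};

/*
 * Alternative raised above: a single flag meaning "the application must
 * byte-swap for this device"; false by default, true only for a PMD with
 * the non-default assumption.
 */
struct drv_info_bool_sketch {
	bool byte_swap_required;
};
```

The bool bakes one convention in as the implicit default, while the enum names both explicitly, which matters if the standard leaves the choice open.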

> 
> >  	/** Default queue configuration used if none is supplied  */
> >  	struct rte_bbdev_queue_conf default_queue_conf;
> >  	/** Device operation capabilities */
> > --
> > 1.8.3.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-07 15:41  0%     ` Chautru, Nicolas
@ 2021-10-07 16:49  0%       ` Nipun Gupta
  2021-10-07 18:58  0%         ` Chautru, Nicolas
  0 siblings, 1 reply; 200+ results
From: Nipun Gupta @ 2021-10-07 16:49 UTC (permalink / raw)
  To: Chautru, Nicolas, Akhil Goyal, dev, trix
  Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand



> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Thursday, October 7, 2021 9:12 PM
> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> <nipun.gupta@nxp.com>; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
> 
> Hi Akhil,
> 
> 
> > -----Original Message-----
> > From: Akhil Goyal <gakhil@marvell.com>
> > Sent: Thursday, October 7, 2021 6:14 AM
> > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > nipun.gupta@nxp.com; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > Title is too long.
> > bbdev: add dev info for data endianness
> 
> OK
> 
> >
> > > Adding device information to capture explicitly the assumption of the
> > > input/output data byte endianness being processed.
> > >
> > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > ---
> > >  doc/guides/rel_notes/release_21_11.rst             | 1 +
> > >  drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
> > >  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
> > >  drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
> > >  lib/bbdev/rte_bbdev.h                              | 8 ++++++++
> > >  6 files changed, 13 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > b/doc/guides/rel_notes/release_21_11.rst
> > > index a8900a3..f0b3006 100644
> > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > @@ -191,6 +191,7 @@ API Changes
> > >
> > >  * bbdev: Added capability related to more comprehensive CRC options.
> > >
> > > +* bbdev: Added device info related to data byte endianness processing
> > > assumption.
> >
> > It is not clear from the description or the release notes, what the application
> > is supposed to do based on the new dev_info field set and how the driver
> > determine what value to set?
> > Isn't there a standard from the application stand point that the input/output
> > data Should be in BE or in LE like in case of IP packets which are always in BE?
> > I mean why is it dependent on the PMD which is processing it?
> > Whatever application understands, PMD should comply with that and do
> > internal Swapping if it does not support it.
> > Am I missing something?
> 
> This is really to allow Nipin to add his own NXP la12xx PMD, which appears to
> have different assumption on endianness.
> All existing processing is done in LE by default by the existing PMDs and the
> existing ecosystem.
> I cannot comment on why they would want to do that for the la12xx specifically,
> I could only speculate but here trying to help to find the best way for the new
> PMD to be supported.
> So here this suggested change is purely about exposing different assumption for
> the PMDs, so that this new PMD can still be supported under this API even
> though this is in effect incompatible with existing ecosystem.
> In case the application has different assumption that what the PMD does, then
> byte swapping would have to be done in the application, more likely I assume
> that la12xx has its own ecosystem with different endianness required for other
> reasons.
> The option you are suggesting would be to put the burden on the PMD but I
> doubt there is an actual usecase for that. I assume they assume different
> endianness for other specific reason, not necessary to be compatible with
> existing ecosystem.
> Niping, Hemant, feel free to comment back, from previous discussion I believe
> this is what you wanted to do. Unsure of the reason, feel free to share more
> details or not.

Akhil/Nicolas,

As Hemant mentioned on v4 (previously asked by Dave)

"---
If we go back to the data providing source i.e. FAPI interface, it is 
implementation specific, as per SCF222.

Our customers do use BE data in network and at FAPI interface.

In LA12xx, at present, we use u8 Big-endian data for processing to FECA 
engine.  We do see that other drivers in DPDK are using Little Endian 
*(with u32 data)* but standards is open for both.
"---

The standard is not specific about endianness and leaves it open to the implementation.
So there is no reason to have one endianness as the default and the
other managed in the PMD, and the current change seems right.

Yes, an endianness assumption is made in the test vector input/output data,
but this should be acceptable as it does not impact the PMDs and
end-user applications in general.

BTW Nicolos, my name is Nipun :)

> 
> 
> >
> > >
> > >  ABI Changes
> > >  -----------
> > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > index 4e2feef..eb2c6c1 100644
> > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > @@ -1089,6 +1089,7 @@
> > >  #else
> > >  	dev_info->harq_buffer_size = 0;
> > >  #endif
> > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >  	acc100_check_ir(d);
> > >  }
> > >
> > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > index 6485cc8..c7f15c0 100644
> > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > @@ -372,6 +372,7 @@
> > >  	dev_info->default_queue_conf = default_queue_conf;
> > >  	dev_info->capabilities = bbdev_capabilities;
> > >  	dev_info->cpu_flag_reqs = NULL;
> > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > >  	/* Calculates number of queues assigned to device */
> > >  	dev_info->max_num_queues = 0;
> > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > index 350c424..72e213e 100644
> > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > >  	dev_info->default_queue_conf = default_queue_conf;
> > >  	dev_info->capabilities = bbdev_capabilities;
> > >  	dev_info->cpu_flag_reqs = NULL;
> > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > >  	/* Calculates number of queues assigned to device */
> > >  	dev_info->max_num_queues = 0;
> > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > index e1db2bf..0cab91a 100644
> > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > >  	dev_info->capabilities = bbdev_capabilities;
> > >  	dev_info->min_alignment = 64;
> > >  	dev_info->harq_buffer_size = 0;
> > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > >
> > >  	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > >dev_id);
> > >  }
> > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > 3ebf62e..b3f3000 100644
> > > --- a/lib/bbdev/rte_bbdev.h
> > > +++ b/lib/bbdev/rte_bbdev.h
> > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > >  	RTE_BBDEV_INITIALIZED
> > >  };
> > >
> > > +/** Definitions of device data byte endianness types */ enum
> > > +rte_bbdev_endianness {
> > > +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> > > +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> > If at all be need this dev_info field,
> > as Tom suggested we should use RTE_BIG/LITTLE_ENDIAN.
> 
> See separate comment on my reply to Tom:
> I considered this but the usage is different, these are build time #define, and
> really would bring confusion here.
> Note that there are not really the endianness of the system itself but specific to
> the bbdev data output going through signal processing.
> I thought it was more explicit and less confusing this way, feel free to comment
> back.
> NXP would know best why a different endianness would be required in the PMD.

Please see the previous comment regarding endianness support.
I agree with the RTE_ prefix; we can add it, as it is part of the application interface.

> 
> >
> > > +
> > >  /**
> > >   * Get the total number of devices that have been successfully initialised.
> > >   *
> > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > >  	uint16_t min_alignment;
> > >  	/** HARQ memory available in kB */
> > >  	uint32_t harq_buffer_size;
> > > +	/** Byte endianness assumption for input/output data */
> > > +	enum rte_bbdev_endianness data_endianness;
> >
> > We should define how the input and output data are expected from the app.
> > If need be, we can define a simple ``bool swap`` instead of an enum.
> 
> This could be done as well. Default no swap, and swap required for the new
> PMD.
> I will let Nipin/Hemant comment back.

Again, endianness is implementation-specific and not standardized for 5G processing,
unlike for network packets.

Regards,
Nipun

> 
> >
> > >  	/** Default queue configuration used if none is supplied  */
> > >  	struct rte_bbdev_queue_conf default_queue_conf;
> > >  	/** Device operation capabilities */
> > > --
> > > 1.8.3.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-07 16:49  0%       ` Nipun Gupta
@ 2021-10-07 18:58  0%         ` Chautru, Nicolas
  2021-10-08  4:34  0%           ` Nipun Gupta
  0 siblings, 1 reply; 200+ results
From: Chautru, Nicolas @ 2021-10-07 18:58 UTC (permalink / raw)
  To: Nipun Gupta, Akhil Goyal, dev, trix
  Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand

Hi Nipun, 

> -----Original Message-----
> From: Nipun Gupta <nipun.gupta@nxp.com>
> Sent: Thursday, October 7, 2021 9:49 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>; Akhil Goyal
> <gakhil@marvell.com>; dev@dpdk.org; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> endianness assumption
> 
> 
> 
> > -----Original Message-----
> > From: Chautru, Nicolas <nicolas.chautru@intel.com>
> > Sent: Thursday, October 7, 2021 9:12 PM
> > To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> > <nipun.gupta@nxp.com>; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan
> <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> > Hi Akhil,
> >
> >
> > > -----Original Message-----
> > > From: Akhil Goyal <gakhil@marvell.com>
> > > Sent: Thursday, October 7, 2021 6:14 AM
> > > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > > nipun.gupta@nxp.com; trix@redhat.com
> > > Cc: thomas@monjalon.net; Zhang, Mingshan
> <mingshan.zhang@intel.com>;
> > > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > > david.marchand@redhat.com
> > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > endianness assumption
> > > >
> > > Title is too long.
> > > bbdev: add dev info for data endianness
> >
> > OK
> >
> > >
> > > > Adding device information to capture explicitly the assumption of
> > > > the input/output data byte endianness being processed.
> > > >
> > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > ---
> > > >  doc/guides/rel_notes/release_21_11.rst             | 1 +
> > > >  drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
> > > >  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > > >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
> > > >  drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
> > > >  lib/bbdev/rte_bbdev.h                              | 8 ++++++++
> > > >  6 files changed, 13 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > index a8900a3..f0b3006 100644
> > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > @@ -191,6 +191,7 @@ API Changes
> > > >
> > > >  * bbdev: Added capability related to more comprehensive CRC
> options.
> > > >
> > > > +* bbdev: Added device info related to data byte endianness
> > > > +processing
> > > > assumption.
> > >
> > > It is not clear from the description or the release notes, what the
> > > application is supposed to do based on the new dev_info field set
> > > and how the driver determine what value to set?
> > > Isn't there a standard from the application stand point that the
> > > input/output data Should be in BE or in LE like in case of IP packets which
> are always in BE?
> > > I mean why is it dependent on the PMD which is processing it?
> > > Whatever application understands, PMD should comply with that and do
> > > internal Swapping if it does not support it.
> > > Am I missing something?
> >
> > This is really to allow Nipin to add his own NXP la12xx PMD, which
> > appears to have different assumption on endianness.
> > All existing processing is done in LE by default by the existing PMDs
> > and the existing ecosystem.
> > I cannot comment on why they would want to do that for the la12xx
> > specifically, I could only speculate but here trying to help to find
> > the best way for the new PMD to be supported.
> > So here this suggested change is purely about exposing different
> > assumption for the PMDs, so that this new PMD can still be supported
> > under this API even though this is in effect incompatible with existing
> ecosystem.
> > In case the application has different assumption that what the PMD
> > does, then byte swapping would have to be done in the application,
> > more likely I assume that la12xx has its own ecosystem with different
> > endianness required for other reasons.
> > The option you are suggesting would be to put the burden on the PMD
> > but I doubt there is an actual usecase for that. I assume they assume
> > different endianness for other specific reason, not necessary to be
> > compatible with existing ecosystem.
> > Niping, Hemant, feel free to comment back, from previous discussion I
> > believe this is what you wanted to do. Unsure of the reason, feel free
> > to share more details or not.
> 
> Akhil/Nicolas,
> 
> As Hemant mentioned on v4 (previously asked by Dave)
> 
> "---
> If we go back to the data providing source i.e. FAPI interface, it is
> implementation specific, as per SCF222.
> 
> Our customers do use BE data in network and at FAPI interface.
> 
> In LA12xx, at present, we use u8 Big-endian data for processing to FECA
> engine.  We do see that other drivers in DPDK are using Little Endian *(with
> u32 data)* but standards is open for both.
> "---
> 
> Standard is not specific to endianness and is open for implementation.
> So it does not makes a reason to have one endianness as default and other
> managed in the PMD, and the current change seems right.
> 
> Yes endianness assumption is taken in the test vector input/output data, but
> this should be acceptable as it does not impact the PMD's and end user
> applications in general.

I want to clarify that this would impact the application in case the user wanted to switch between 2 such hw accelerators. 
I.e. you cannot swap the 2 solutions; they are incompatible unless you explicitly do the byteswap in the application (as is done in bbdev-test).
Not necessarily a problem in case they address 2 different ecosystems, but capturing the implication to be explicit: each device exposes the assumptions expected of the application, and it is up to the application using the bbdev API to satisfy the related assumptions. 

> 
> BTW Nicolos, my name is Nipun :)

My bad! 

I am marking this patch as obsolete since you have included it in your series. 


> 
> >
> >
> > >
> > > >
> > > >  ABI Changes
> > > >  -----------
> > > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > index 4e2feef..eb2c6c1 100644
> > > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > @@ -1089,6 +1089,7 @@
> > > >  #else
> > > >  	dev_info->harq_buffer_size = 0;
> > > >  #endif
> > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >  	acc100_check_ir(d);
> > > >  }
> > > >
> > > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > index 6485cc8..c7f15c0 100644
> > > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > @@ -372,6 +372,7 @@
> > > >  	dev_info->default_queue_conf = default_queue_conf;
> > > >  	dev_info->capabilities = bbdev_capabilities;
> > > >  	dev_info->cpu_flag_reqs = NULL;
> > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > >  	/* Calculates number of queues assigned to device */
> > > >  	dev_info->max_num_queues = 0;
> > > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > index 350c424..72e213e 100644
> > > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > > >  	dev_info->default_queue_conf = default_queue_conf;
> > > >  	dev_info->capabilities = bbdev_capabilities;
> > > >  	dev_info->cpu_flag_reqs = NULL;
> > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > >  	/* Calculates number of queues assigned to device */
> > > >  	dev_info->max_num_queues = 0;
> > > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > index e1db2bf..0cab91a 100644
> > > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > > >  	dev_info->capabilities = bbdev_capabilities;
> > > >  	dev_info->min_alignment = 64;
> > > >  	dev_info->harq_buffer_size = 0;
> > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > >
> > > >  	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > > >dev_id);
> > > >  }
> > > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > > 3ebf62e..b3f3000 100644
> > > > --- a/lib/bbdev/rte_bbdev.h
> > > > +++ b/lib/bbdev/rte_bbdev.h
> > > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > > >  	RTE_BBDEV_INITIALIZED
> > > >  };
> > > >
> > > > +/** Definitions of device data byte endianness types */ enum
> > > > +rte_bbdev_endianness {
> > > > +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE */
> > > > +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness LE */ };
> > > If at all be need this dev_info field, as Tom suggested we should
> > > use RTE_BIG/LITTLE_ENDIAN.
> >
> > See separate comment on my reply to Tom:
> > I considered this but the usage is different, these are build time
> > #define, and really would bring confusion here.
> > Note that there are not really the endianness of the system itself but
> > specific to the bbdev data output going through signal processing.
> > I thought it was more explicit and less confusing this way, feel free
> > to comment back.
> > NXP would know best why a different endianness would be required in the
> PMD.
> 
> Please see previous comment for endianness support.
> I agree with the RTE_ prefix we can add it as it is for the application interface.
> 
> >
> > >
> > > > +
> > > >  /**
> > > >   * Get the total number of devices that have been successfully
> initialised.
> > > >   *
> > > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > > >  	uint16_t min_alignment;
> > > >  	/** HARQ memory available in kB */
> > > >  	uint32_t harq_buffer_size;
> > > > +	/** Byte endianness assumption for input/output data */
> > > > +	enum rte_bbdev_endianness data_endianness;
> > >
> > > We should define how the input and output data are expected from the
> app.
> > > If need be, we can define a simple ``bool swap`` instead of an enum.
> >
> > This could be done as well. Default no swap, and swap required for the
> > new PMD.
> > I will let Nipin/Hemant comment back.
> 
> Again endianness is implementation specific and not standard for 5G
> processing, unlike it is for network packet.
> 
> Regards,
> Nipun
> 
> >
> > >
> > > >  	/** Default queue configuration used if none is supplied  */
> > > >  	struct rte_bbdev_queue_conf default_queue_conf;
> > > >  	/** Device operation capabilities */
> > > > --
> > > > 1.8.3.1


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3 1/2] net: rename Ethernet header fields
  @ 2021-10-07 22:07  1%     ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:07 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk, Ferruh Yigit, Olivier Matz, Stephen Hemminger

Definition of `rte_ether_addr` structure used a workaround allowing DPDK
and Windows SDK headers to be used in the same file, because Windows SDK
defines `s_addr` as a macro. Rename `s_addr` to `src_addr` and `d_addr`
to `dst_addr` to avoid the conflict and remove the workaround.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2021-July/215270.html

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
 app/test-pmd/5tswap.c                         |  6 +-
 app/test-pmd/csumonly.c                       |  4 +-
 app/test-pmd/flowgen.c                        |  4 +-
 app/test-pmd/icmpecho.c                       | 16 ++---
 app/test-pmd/ieee1588fwd.c                    |  6 +-
 app/test-pmd/macfwd.c                         |  4 +-
 app/test-pmd/macswap.h                        |  6 +-
 app/test-pmd/txonly.c                         |  4 +-
 app/test-pmd/util.c                           |  4 +-
 app/test/packet_burst_generator.c             |  4 +-
 app/test/test_bpf.c                           |  4 +-
 app/test/test_link_bonding_mode4.c            | 15 ++--
 doc/guides/rel_notes/deprecation.rst          |  9 ++-
 doc/guides/rel_notes/release_21_11.rst        |  3 +
 drivers/net/avp/avp_ethdev.c                  |  6 +-
 drivers/net/bnx2x/bnx2x.c                     | 16 ++---
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  6 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |  4 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 20 +++---
 drivers/net/enic/enic_flow.c                  |  8 +--
 drivers/net/mlx5/mlx5_txpp.c                  |  4 +-
 examples/bond/main.c                          | 20 +++---
 examples/ethtool/ethtool-app/main.c           |  4 +-
 examples/eventdev_pipeline/pipeline_common.h  |  4 +-
 examples/flow_filtering/main.c                |  4 +-
 examples/ioat/ioatfwd.c                       |  4 +-
 examples/ip_fragmentation/main.c              |  4 +-
 examples/ip_reassembly/main.c                 |  4 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  4 +-
 examples/ipsec-secgw/ipsec_worker.c           |  4 +-
 examples/ipv4_multicast/main.c                |  4 +-
 examples/l2fwd-crypto/main.c                  |  4 +-
 examples/l2fwd-event/l2fwd_common.h           |  4 +-
 examples/l2fwd-jobstats/main.c                |  4 +-
 examples/l2fwd-keepalive/main.c               |  4 +-
 examples/l2fwd/main.c                         |  4 +-
 examples/l3fwd-acl/main.c                     | 19 ++---
 examples/l3fwd-power/main.c                   |  8 +--
 examples/l3fwd/l3fwd_em.h                     |  8 +--
 examples/l3fwd/l3fwd_fib.c                    |  4 +-
 examples/l3fwd/l3fwd_lpm.c                    |  4 +-
 examples/l3fwd/l3fwd_lpm.h                    |  8 +--
 examples/link_status_interrupt/main.c         |  4 +-
 .../performance-thread/l3fwd-thread/main.c    | 72 +++++++++----------
 examples/ptpclient/ptpclient.c                | 16 ++---
 examples/vhost/main.c                         | 11 +--
 examples/vmdq/main.c                          |  4 +-
 examples/vmdq_dcb/main.c                      |  4 +-
 lib/ethdev/rte_flow.h                         |  4 +-
 lib/gro/gro_tcp4.c                            |  4 +-
 lib/gro/gro_udp4.c                            |  4 +-
 lib/gro/gro_vxlan_tcp4.c                      |  8 +--
 lib/gro/gro_vxlan_udp4.c                      |  8 +--
 lib/net/rte_arp.c                             |  4 +-
 lib/net/rte_ether.h                           | 22 +-----
 lib/pipeline/rte_table_action.c               | 40 +++++------
 56 files changed, 241 insertions(+), 244 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index e8cef9623b..629d3e0d31 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -27,9 +27,9 @@ swap_mac(struct rte_ether_hdr *eth_hdr)
 	struct rte_ether_addr addr;
 
 	/* Swap dest and src mac addresses. */
-	rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-	rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+	rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+	rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 }
 
 static inline void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533..090797318a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -873,9 +873,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 
 		eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 		rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
-				&eth_hdr->d_addr);
+				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		parse_ethernet(eth_hdr, &info);
 		l3_hdr = (char *)eth_hdr + info.l2_len;
 
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a64d..a96169e680 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -122,8 +122,8 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 			/* Initialize Ethernet header. */
 			eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
-			rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
-			rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
+			rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->dst_addr);
+			rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->src_addr);
 			eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
 			/* Initialize IP header. */
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 8948f28eb5..8f1d68a83a 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -319,8 +319,8 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 		if (verbose_level > 0) {
 			printf("\nPort %d pkt-len=%u nb-segs=%u\n",
 			       fs->rx_port, pkt->pkt_len, pkt->nb_segs);
-			ether_addr_dump("  ETH:  src=", &eth_h->s_addr);
-			ether_addr_dump(" dst=", &eth_h->d_addr);
+			ether_addr_dump("  ETH:  src=", &eth_h->src_addr);
+			ether_addr_dump(" dst=", &eth_h->dst_addr);
 		}
 		if (eth_type == RTE_ETHER_TYPE_VLAN) {
 			vlan_h = (struct rte_vlan_hdr *)
@@ -385,17 +385,17 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			 */
 
 			/* Use source MAC address as destination MAC address. */
-			rte_ether_addr_copy(&eth_h->s_addr, &eth_h->d_addr);
+			rte_ether_addr_copy(&eth_h->src_addr, &eth_h->dst_addr);
 			/* Set source MAC address with MAC address of TX port */
 			rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-					&eth_h->s_addr);
+					&eth_h->src_addr);
 
 			arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 			rte_ether_addr_copy(&arp_h->arp_data.arp_tha,
 					&eth_addr);
 			rte_ether_addr_copy(&arp_h->arp_data.arp_sha,
 					&arp_h->arp_data.arp_tha);
-			rte_ether_addr_copy(&eth_h->s_addr,
+			rte_ether_addr_copy(&eth_h->src_addr,
 					&arp_h->arp_data.arp_sha);
 
 			/* Swap IP addresses in ARP payload */
@@ -453,9 +453,9 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 		 * ICMP checksum is computed by assuming it is valid in the
 		 * echo request and not verified.
 		 */
-		rte_ether_addr_copy(&eth_h->s_addr, &eth_addr);
-		rte_ether_addr_copy(&eth_h->d_addr, &eth_h->s_addr);
-		rte_ether_addr_copy(&eth_addr, &eth_h->d_addr);
+		rte_ether_addr_copy(&eth_h->src_addr, &eth_addr);
+		rte_ether_addr_copy(&eth_h->dst_addr, &eth_h->src_addr);
+		rte_ether_addr_copy(&eth_addr, &eth_h->dst_addr);
 		ip_addr = ip_h->src_addr;
 		if (is_multicast_ipv4_addr(ip_h->dst_addr)) {
 			uint32_t ip_src;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..9cf10c1c50 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -178,9 +178,9 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	port_ieee1588_rx_timestamp_check(fs->rx_port, timesync_index);
 
 	/* Swap dest and src mac addresses. */
-	rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-	rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+	rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+	rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
 	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d..ee76df7f03 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -85,9 +85,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		mb = pkts_burst[i];
 		eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 		rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
-				&eth_hdr->d_addr);
+				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
diff --git a/app/test-pmd/macswap.h b/app/test-pmd/macswap.h
index 0138441566..29c252bb8f 100644
--- a/app/test-pmd/macswap.h
+++ b/app/test-pmd/macswap.h
@@ -29,9 +29,9 @@ do_macswap(struct rte_mbuf *pkts[], uint16_t nb,
 		eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 
 		/* Swap dest and src mac addresses. */
-		rte_ether_addr_copy(&eth_hdr->d_addr, &addr);
-		rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-		rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
+		rte_ether_addr_copy(&eth_hdr->dst_addr, &addr);
+		rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+		rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 		mbuf_field_set(mb, ol_flags);
 	}
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..40655801cc 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -362,8 +362,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	/*
 	 * Initialize Ethernet header.
 	 */
-	rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
-	rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
+	rte_ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.dst_addr);
+	rte_ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.src_addr);
 	eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 
 	if (rte_mempool_get_bulk(mbp, (void **)pkts_burst,
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..51506e4940 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -142,9 +142,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 					  " - no miss group");
 			MKDUMPSTR(print_buf, buf_size, cur_len, "\n");
 		}
-		print_ether_addr("  src=", &eth_hdr->s_addr,
+		print_ether_addr("  src=", &eth_hdr->src_addr,
 				 print_buf, buf_size, &cur_len);
-		print_ether_addr(" - dst=", &eth_hdr->d_addr,
+		print_ether_addr(" - dst=", &eth_hdr->dst_addr,
 				 print_buf, buf_size, &cur_len);
 		MKDUMPSTR(print_buf, buf_size, cur_len,
 			  " - type=0x%04x - length=%u - nb_segs=%d",
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index 0fd7290b0e..8ac24577ba 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -56,8 +56,8 @@ initialize_eth_header(struct rte_ether_hdr *eth_hdr,
 		struct rte_ether_addr *dst_mac, uint16_t ether_type,
 		uint8_t vlan_enabled, uint16_t van_id)
 {
-	rte_ether_addr_copy(dst_mac, &eth_hdr->d_addr);
-	rte_ether_addr_copy(src_mac, &eth_hdr->s_addr);
+	rte_ether_addr_copy(dst_mac, &eth_hdr->dst_addr);
+	rte_ether_addr_copy(src_mac, &eth_hdr->src_addr);
 
 	if (vlan_enabled) {
 		struct rte_vlan_hdr *vhdr = (struct rte_vlan_hdr *)(
diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index 527c06b807..8118a1849b 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -1008,9 +1008,9 @@ test_jump2_prepare(void *arg)
 	 * Initialize ether header.
 	 */
 	rte_ether_addr_copy((struct rte_ether_addr *)dst_mac,
-			    &dn->eth_hdr.d_addr);
+			    &dn->eth_hdr.dst_addr);
 	rte_ether_addr_copy((struct rte_ether_addr *)src_mac,
-			    &dn->eth_hdr.s_addr);
+			    &dn->eth_hdr.src_addr);
 	dn->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 
 	/*
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7ad..f120b2e3be 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -502,8 +502,8 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	slow_hdr = rte_pktmbuf_mtod(pkt, struct slow_protocol_frame *);
 
 	/* Change source address to partner address */
-	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.s_addr);
-	slow_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
+	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 		slave->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
@@ -870,7 +870,7 @@ test_mode4_rx(void)
 
 		for (i = 0; i < expected_pkts_cnt; i++) {
 			hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
-			cnt[rte_is_same_ether_addr(&hdr->d_addr,
+			cnt[rte_is_same_ether_addr(&hdr->dst_addr,
 							&bonded_mac)]++;
 		}
 
@@ -918,7 +918,7 @@ test_mode4_rx(void)
 
 		for (i = 0; i < expected_pkts_cnt; i++) {
 			hdr = rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);
-			eq_cnt += rte_is_same_ether_addr(&hdr->d_addr,
+			eq_cnt += rte_is_same_ether_addr(&hdr->dst_addr,
 							&bonded_mac);
 		}
 
@@ -1163,11 +1163,12 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 
 	/* Copy multicast destination address */
 	rte_ether_addr_copy(&slow_protocol_mac_addr,
-			&marker_hdr->eth_hdr.d_addr);
+			&marker_hdr->eth_hdr.dst_addr);
 
 	/* Init source address */
-	rte_ether_addr_copy(&parnter_mac_default, &marker_hdr->eth_hdr.s_addr);
-	marker_hdr->eth_hdr.s_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+	rte_ether_addr_copy(&parnter_mac_default,
+			&marker_hdr->eth_hdr.src_addr);
+	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 		slave->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b..918ea3f403 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -164,8 +164,13 @@ Deprecation Notices
   consistent with existing outer header checksum status flag naming, which
   should help in reducing confusion about its usage.
 
-* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
-  will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets headers.
+* i40e: As there are both i40evf and iavf pmd, the functions of them are
+  duplicated. And now more and more advanced features are developed on iavf.
+  To keep consistent with kernel driver's name
+  (https://patchwork.ozlabs.org/patch/970154/), i40evf is no need to maintain.
+  Starting from 21.05, the default VF driver of i40e will be iavf, but i40evf
+  can still be used if users specify the devarg "driver=i40evf". I40evf will
+  be deleted in DPDK 21.11.
 
 * net: The structure ``rte_ipv4_hdr`` will have two unions.
   The first union is for existing ``version_ihl`` byte
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index efeffe37a0..907e45c4e7 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
+  to ``src_addr`` and ``dst_addr``, respectively.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..b5fafd32b0 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1205,17 +1205,17 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
 {
 	struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->d_addr) == 0)) {
+	if (likely(_avp_cmp_ether_addr(&avp->ethaddr, &eth->dst_addr) == 0)) {
 		/* allow all packets destined to our address */
 		return 0;
 	}
 
-	if (likely(rte_is_broadcast_ether_addr(&eth->d_addr))) {
+	if (likely(rte_is_broadcast_ether_addr(&eth->dst_addr))) {
 		/* allow all broadcast packets */
 		return 0;
 	}
 
-	if (likely(rte_is_multicast_ether_addr(&eth->d_addr))) {
+	if (likely(rte_is_multicast_ether_addr(&eth->dst_addr))) {
 		/* allow all multicast packets */
 		return 0;
 	}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9163b8b1fd..083deff1b1 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2233,8 +2233,8 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 		tx_parse_bd =
 		    &txq->tx_ring[TX_BD(bd_prod, txq)].parse_bd_e2;
-		if (rte_is_multicast_ether_addr(&eh->d_addr)) {
-			if (rte_is_broadcast_ether_addr(&eh->d_addr))
+		if (rte_is_multicast_ether_addr(&eh->dst_addr)) {
+			if (rte_is_broadcast_ether_addr(&eh->dst_addr))
 				mac_type = BROADCAST_ADDRESS;
 			else
 				mac_type = MULTICAST_ADDRESS;
@@ -2243,17 +2243,17 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 		    (mac_type << ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE_SHIFT);
 
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_hi,
-			   &eh->d_addr.addr_bytes[0], 2);
+			   &eh->dst_addr.addr_bytes[0], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_mid,
-			   &eh->d_addr.addr_bytes[2], 2);
+			   &eh->dst_addr.addr_bytes[2], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.dst_lo,
-			   &eh->d_addr.addr_bytes[4], 2);
+			   &eh->dst_addr.addr_bytes[4], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_hi,
-			   &eh->s_addr.addr_bytes[0], 2);
+			   &eh->src_addr.addr_bytes[0], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_mid,
-			   &eh->s_addr.addr_bytes[2], 2);
+			   &eh->src_addr.addr_bytes[2], 2);
 		rte_memcpy(&tx_parse_bd->data.mac_addr.src_lo,
-			   &eh->s_addr.addr_bytes[4], 2);
+			   &eh->src_addr.addr_bytes[4], 2);
 
 		tx_parse_bd->data.mac_addr.dst_hi =
 		    rte_cpu_to_be_16(tx_parse_bd->data.mac_addr.dst_hi);
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 8b5b32fcaf..3558644232 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -587,8 +587,8 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 	hdr = rte_pktmbuf_mtod(lacp_pkt, struct lacpdu_header *);
 
 	/* Source and destination MAC */
-	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.d_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.s_addr);
+	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
+	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -1346,7 +1346,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.s_addr);
+		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 1d36a4a4a2..86335a7971 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -213,8 +213,8 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
 
-	rte_ether_addr_copy(&client_info->app_mac, &eth_h->s_addr);
-	rte_ether_addr_copy(&client_info->cli_mac, &eth_h->d_addr);
+	rte_ether_addr_copy(&client_info->app_mac, &eth_h->src_addr);
+	rte_ether_addr_copy(&client_info->cli_mac, &eth_h->dst_addr);
 	if (client_info->vlan_count > 0)
 		eth_h->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN);
 	else
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..6831fcb104 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -342,11 +342,11 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 						 bufs[j])) ||
 				!collecting ||
 				(!promisc &&
-				 ((rte_is_unicast_ether_addr(&hdr->d_addr) &&
+				 ((rte_is_unicast_ether_addr(&hdr->dst_addr) &&
 				   !rte_is_same_ether_addr(bond_mac,
-						       &hdr->d_addr)) ||
+						       &hdr->dst_addr)) ||
 				  (!allmulti &&
-				   rte_is_multicast_ether_addr(&hdr->d_addr)))))) {
+				   rte_is_multicast_ether_addr(&hdr->dst_addr)))))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
@@ -477,9 +477,9 @@ update_client_stats(uint32_t addr, uint16_t port, uint32_t *TXorRXindicator)
 		"DstMAC:" RTE_ETHER_ADDR_PRT_FMT " DstIP:%s %s %d\n", \
 		info,							\
 		port,							\
-		RTE_ETHER_ADDR_BYTES(&eth_h->s_addr),                  \
+		RTE_ETHER_ADDR_BYTES(&eth_h->src_addr),                  \
 		src_ip,							\
-		RTE_ETHER_ADDR_BYTES(&eth_h->d_addr),                  \
+		RTE_ETHER_ADDR_BYTES(&eth_h->dst_addr),                  \
 		dst_ip,							\
 		arp_op, ++burstnumber)
 #endif
@@ -643,9 +643,9 @@ static inline uint16_t
 ether_hash(struct rte_ether_hdr *eth_hdr)
 {
 	unaligned_uint16_t *word_src_addr =
-		(unaligned_uint16_t *)eth_hdr->s_addr.addr_bytes;
+		(unaligned_uint16_t *)eth_hdr->src_addr.addr_bytes;
 	unaligned_uint16_t *word_dst_addr =
-		(unaligned_uint16_t *)eth_hdr->d_addr.addr_bytes;
+		(unaligned_uint16_t *)eth_hdr->dst_addr.addr_bytes;
 
 	return (word_src_addr[0] ^ word_dst_addr[0]) ^
 			(word_src_addr[1] ^ word_dst_addr[1]) ^
@@ -942,10 +942,10 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
-			if (rte_is_same_ether_addr(&ether_hdr->s_addr,
+			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
 							&primary_slave_addr))
 				rte_ether_addr_copy(&active_slave_addr,
-						&ether_hdr->s_addr);
+						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
 #endif
@@ -1017,7 +1017,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->s_addr);
+			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
 
 			/* Add packet to slave tx buffer */
 			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cdfdc904a6..33147169ba 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -656,14 +656,14 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
 	if (!mask)
 		mask = &rte_flow_item_eth_mask;
 
-	memcpy(enic_spec.d_addr.addr_bytes, spec->dst.addr_bytes,
+	memcpy(enic_spec.dst_addr.addr_bytes, spec->dst.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_spec.s_addr.addr_bytes, spec->src.addr_bytes,
+	memcpy(enic_spec.src_addr.addr_bytes, spec->src.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 
-	memcpy(enic_mask.d_addr.addr_bytes, mask->dst.addr_bytes,
+	memcpy(enic_mask.dst_addr.addr_bytes, mask->dst.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
-	memcpy(enic_mask.s_addr.addr_bytes, mask->src.addr_bytes,
+	memcpy(enic_mask.src_addr.addr_bytes, mask->src.addr_bytes,
 	       RTE_ETHER_ADDR_LEN);
 	enic_spec.ether_type = spec->type;
 	enic_mask.ether_type = mask->type;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..2be7e71f89 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -333,8 +333,8 @@ mlx5_txpp_fill_wqe_clock_queue(struct mlx5_dev_ctx_shared *sh)
 		/* Build test packet L2 header (Ethernet). */
 		dst = (uint8_t *)&es->inline_data;
 		eth_hdr = (struct rte_ether_hdr *)dst;
-		rte_eth_random_addr(&eth_hdr->d_addr.addr_bytes[0]);
-		rte_eth_random_addr(&eth_hdr->s_addr.addr_bytes[0]);
+		rte_eth_random_addr(&eth_hdr->dst_addr.addr_bytes[0]);
+		rte_eth_random_addr(&eth_hdr->src_addr.addr_bytes[0]);
 		eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
 		/* Build test packet L3 header (IP v4). */
 		dst += sizeof(struct rte_ether_hdr);
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f..7adaa93cad 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -358,7 +358,7 @@ struct global_flag_stru_t *global_flag_stru_p = &global_flag_stru;
 static int lcore_main(__rte_unused void *arg1)
 {
 	struct rte_mbuf *pkts[MAX_PKT_BURST] __rte_cache_aligned;
-	struct rte_ether_addr d_addr;
+	struct rte_ether_addr dst_addr;
 
 	struct rte_ether_addr bond_mac_addr;
 	struct rte_ether_hdr *eth_hdr;
@@ -422,13 +422,13 @@ static int lcore_main(__rte_unused void *arg1)
 					if (arp_hdr->arp_opcode == rte_cpu_to_be_16(RTE_ARP_OP_REQUEST)) {
 						arp_hdr->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 						/* Switch src and dst data and set bonding MAC */
-						rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-						rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+						rte_ether_addr_copy(&eth_hdr->src_addr, &eth_hdr->dst_addr);
+						rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
 						rte_ether_addr_copy(&arp_hdr->arp_data.arp_sha,
 								&arp_hdr->arp_data.arp_tha);
 						arp_hdr->arp_data.arp_tip = arp_hdr->arp_data.arp_sip;
-						rte_ether_addr_copy(&bond_mac_addr, &d_addr);
-						rte_ether_addr_copy(&d_addr, &arp_hdr->arp_data.arp_sha);
+						rte_ether_addr_copy(&bond_mac_addr, &dst_addr);
+						rte_ether_addr_copy(&dst_addr, &arp_hdr->arp_data.arp_sha);
 						arp_hdr->arp_data.arp_sip = bond_ip;
 						rte_eth_tx_burst(BOND_PORT, 0, &pkts[i], 1);
 						is_free = 1;
@@ -443,8 +443,10 @@ static int lcore_main(__rte_unused void *arg1)
 				 }
 				ipv4_hdr = (struct rte_ipv4_hdr *)((char *)(eth_hdr + 1) + offset);
 				if (ipv4_hdr->dst_addr == bond_ip) {
-					rte_ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
-					rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
+					rte_ether_addr_copy(&eth_hdr->src_addr,
+							&eth_hdr->dst_addr);
+					rte_ether_addr_copy(&bond_mac_addr,
+							&eth_hdr->src_addr);
 					ipv4_hdr->dst_addr = ipv4_hdr->src_addr;
 					ipv4_hdr->src_addr = bond_ip;
 					rte_eth_tx_burst(BOND_PORT, 0, &pkts[i], 1);
@@ -519,8 +521,8 @@ static void cmd_obj_send_parsed(void *parsed_result,
 	created_pkt->pkt_len = pkt_size;
 
 	eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->s_addr);
-	memset(&eth_hdr->d_addr, 0xFF, RTE_ETHER_ADDR_LEN);
+	rte_ether_addr_copy(&bond_mac_addr, &eth_hdr->src_addr);
+	memset(&eth_hdr->dst_addr, 0xFF, RTE_ETHER_ADDR_LEN);
 	eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP);
 
 	arp_hdr = (struct rte_arp_hdr *)(
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6..1bc675962b 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -172,8 +172,8 @@ static void process_frame(struct app_port *ptr_port,
 	struct rte_ether_hdr *ptr_mac_hdr;
 
 	ptr_mac_hdr = rte_pktmbuf_mtod(ptr_frame, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&ptr_mac_hdr->s_addr, &ptr_mac_hdr->d_addr);
-	rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->s_addr);
+	rte_ether_addr_copy(&ptr_mac_hdr->src_addr, &ptr_mac_hdr->dst_addr);
+	rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->src_addr);
 }
 
 static int worker_main(__rte_unused void *ptr_data)
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 6a4287602e..b12eb281e1 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -104,8 +104,8 @@ exchange_mac(struct rte_mbuf *m)
 
 	/* change mac addresses on packet (to use mbuf data) */
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	rte_ether_addr_copy(&eth->d_addr, &addr);
-	rte_ether_addr_copy(&addr, &eth->d_addr);
+	rte_ether_addr_copy(&eth->dst_addr, &addr);
+	rte_ether_addr_copy(&addr, &eth->dst_addr);
 }
 
 static __rte_always_inline void
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55..dd8a33d036 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -75,9 +75,9 @@ main_loop(void)
 					eth_hdr = rte_pktmbuf_mtod(m,
 							struct rte_ether_hdr *);
 					print_ether_addr("src=",
-							&eth_hdr->s_addr);
+							&eth_hdr->src_addr);
 					print_ether_addr(" - dst=",
-							&eth_hdr->d_addr);
+							&eth_hdr->dst_addr);
 					printf(" - queue=0x%x",
 							(unsigned int)i);
 					printf("\n");
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be5..ff36aa7f1e 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -322,11 +322,11 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid)
 	/* 02:00:00:00:00:xx - overwriting 2 bytes of source address but
 	 * it's acceptable cause it gets overwritten by rte_ether_addr_copy
 	 */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 /* Perform packet copy there is a user-defined function. 8< */
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f245369720..a7f40970f2 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -362,13 +362,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 		m->l2_len = sizeof(struct rte_ether_hdr);
 
 		/* 02:00:00:00:00:xx */
-		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+		d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 		*((uint64_t *)d_addr_bytes) = 0x000000000002 +
 			((uint64_t)port_out << 40);
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[port_out],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 		eth_hdr->ether_type = ether_type;
 	}
 
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790b..d611c7d016 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -413,11 +413,11 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 	/* if packet wasn't IPv4 or IPv6, it's forwarded to the port it came from */
 
 	/* 02:00:00:00:00:xx */
-	d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+	d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 	*((uint64_t *)d_addr_bytes) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->src_addr);
 
 	send_single_packet(m, dst_port);
 }
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..7b01872c6f 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -545,9 +545,9 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
 	}
 
-	memcpy(&ethhdr->s_addr, &ethaddr_tbl[port].src,
+	memcpy(&ethhdr->src_addr, &ethaddr_tbl[port].src,
 			sizeof(struct rte_ether_addr));
-	memcpy(&ethhdr->d_addr, &ethaddr_tbl[port].dst,
+	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[port].dst,
 			sizeof(struct rte_ether_addr));
 }
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..61cf9f57fb 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -49,8 +49,8 @@ update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid)
 	struct rte_ether_hdr *ethhdr;
 
 	ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
-	memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
-	memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
+	memcpy(&ethhdr->src_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN);
+	memcpy(&ethhdr->dst_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN);
 }
 
 static inline void
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b..d10de30ddb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -283,8 +283,8 @@ mcast_send_pkt(struct rte_mbuf *pkt, struct rte_ether_addr *dest_addr,
 		rte_pktmbuf_prepend(pkt, (uint16_t)sizeof(*ethdr));
 	RTE_ASSERT(ethdr != NULL);
 
-	rte_ether_addr_copy(dest_addr, &ethdr->d_addr);
-	rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->s_addr);
+	rte_ether_addr_copy(dest_addr, &ethdr->dst_addr);
+	rte_ether_addr_copy(&ports_eth_addr[port], &ethdr->src_addr);
 	ethdr->ether_type = rte_be_to_cpu_16(RTE_ETHER_TYPE_IPV4);
 
 	/* Put new packet into the output queue */
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf7..c2ffbdd506 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -617,11 +617,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint16_t dest_portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 static void
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 939221d45a..cecbd9b70e 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -92,11 +92,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(addr, &eth->s_addr);
+	rte_ether_addr_copy(addr, &eth->src_addr);
 }
 
 static __rte_always_inline struct l2fwd_resources *
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index afe7fe6ead..06280321b1 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -351,11 +351,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index d0d979f5ba..07271affb4 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -177,11 +177,11 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 05532551a5..f3deeba0a6 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -170,11 +170,11 @@ l2fwd_mac_updating(struct rte_mbuf *m, unsigned dest_portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+	rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->src_addr);
 }
 
 /* Simple forward. 8< */
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564..60545f3059 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1375,7 +1375,8 @@ send_single_packet(struct rte_mbuf *m, uint16_t port)
 
 	/* update src and dst mac*/
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	memcpy(eh, &port_l2hdr[port], sizeof(eh->d_addr) + sizeof(eh->s_addr));
+	memcpy(eh, &port_l2hdr[port],
+			sizeof(eh->dst_addr) + sizeof(eh->src_addr));
 
 	qconf = &lcore_conf[lcore_id];
 	rte_eth_tx_buffer(port, qconf->tx_queue_id[port],
@@ -1743,8 +1744,9 @@ parse_eth_dest(const char *optarg)
 		return "port value exceeds RTE_MAX_ETHPORTS("
 			RTE_STR(RTE_MAX_ETHPORTS) ")";
 
-	if (cmdline_parse_etheraddr(NULL, port_end, &port_l2hdr[portid].d_addr,
-			sizeof(port_l2hdr[portid].d_addr)) < 0)
+	if (cmdline_parse_etheraddr(NULL, port_end,
+			&port_l2hdr[portid].dst_addr,
+			sizeof(port_l2hdr[portid].dst_addr)) < 0)
 		return "Invalid ethernet address";
 	return NULL;
 }
@@ -2002,8 +2004,9 @@ set_default_dest_mac(void)
 	uint32_t i;
 
 	for (i = 0; i != RTE_DIM(port_l2hdr); i++) {
-		port_l2hdr[i].d_addr.addr_bytes[0] = RTE_ETHER_LOCAL_ADMIN_ADDR;
-		port_l2hdr[i].d_addr.addr_bytes[5] = i;
+		port_l2hdr[i].dst_addr.addr_bytes[0] =
+				RTE_ETHER_LOCAL_ADMIN_ADDR;
+		port_l2hdr[i].dst_addr.addr_bytes[5] = i;
 	}
 }
 
@@ -2109,14 +2112,14 @@ main(int argc, char **argv)
 				"rte_eth_dev_adjust_nb_rx_tx_desc: err=%d, port=%d\n",
 				ret, portid);
 
-		ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].s_addr);
+		ret = rte_eth_macaddr_get(portid, &port_l2hdr[portid].src_addr);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
 				"rte_eth_macaddr_get: err=%d, port=%d\n",
 				ret, portid);
 
-		print_ethaddr("Dst MAC:", &port_l2hdr[portid].d_addr);
-		print_ethaddr(", Src MAC:", &port_l2hdr[portid].s_addr);
+		print_ethaddr("Dst MAC:", &port_l2hdr[portid].dst_addr);
+		print_ethaddr(", Src MAC:", &port_l2hdr[portid].src_addr);
 		printf(", ");
 
 		/* init memory */
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44a..73a3ab5bc0 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -717,7 +717,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 			dst_port = portid;
 
 		/* 02:00:00:00:00:xx */
-		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+		d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 		*((uint64_t *)d_addr_bytes) =
 			0x000000000002 + ((uint64_t)dst_port << 40);
 
@@ -729,7 +729,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -749,13 +749,13 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid,
 			dst_port = portid;
 
 		/* 02:00:00:00:00:xx */
-		d_addr_bytes = &eth_hdr->d_addr.addr_bytes[0];
+		d_addr_bytes = &eth_hdr->dst_addr.addr_bytes[0];
 		*((uint64_t *)d_addr_bytes) =
 			0x000000000002 + ((uint64_t)dst_port << 40);
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 #else
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index b992a21da4..e67f5f328c 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -36,11 +36,11 @@ l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
 	++(ipv4_hdr->hdr_checksum);
 #endif
 	/* dst addr */
-	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[dst_port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 
 	return dst_port;
 }
@@ -64,11 +64,11 @@ l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
 		dst_port = portid;
 
 	/* dst addr */
-	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[dst_port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 
 	return dst_port;
 }
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index f8d6a3ac39..7fd7c1dd64 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -92,9 +92,9 @@ fib_send_single(int nb_tx, struct lcore_conf *qconf,
 		/* Set MAC addresses. */
 		eth_hdr = rte_pktmbuf_mtod(pkts_burst[j],
 				struct rte_ether_hdr *);
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[hops[j]];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[hops[j]];
 		rte_ether_addr_copy(&ports_eth_addr[hops[j]],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		/* Send single packet. */
 		send_single_packet(qconf, pkts_burst[j], hops[j]);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 7200160164..232b606b54 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -256,11 +256,11 @@ lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
 	}
 #endif
 	/* dst addr */
-	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+	*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[mbuf->port];
 
 	/* src addr */
 	rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
-			&eth_hdr->s_addr);
+			&eth_hdr->src_addr);
 #endif
 	return mbuf->port;
 }
diff --git a/examples/l3fwd/l3fwd_lpm.h b/examples/l3fwd/l3fwd_lpm.h
index d730d72a20..c61b969584 100644
--- a/examples/l3fwd/l3fwd_lpm.h
+++ b/examples/l3fwd/l3fwd_lpm.h
@@ -40,11 +40,11 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
 		++(ipv4_hdr->hdr_checksum);
 #endif
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(qconf, m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -62,11 +62,11 @@ l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid,
 			dst_port = portid;
 
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(qconf, m, dst_port);
 	} else {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index a0bc1e56d0..e4542df11f 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -182,11 +182,11 @@ lsi_simple_forward(struct rte_mbuf *m, unsigned portid)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&lsi_ports_eth_addr[dst_port], &eth->src_addr);
 
 	buffer = tx_buffer[dst_port];
 	sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf26..2905199743 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -1068,24 +1068,24 @@ simple_ipv4_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
 #endif
 
 	/* dst addr */
-	*(uint64_t *)&eth_hdr[0]->d_addr = dest_eth_addr[dst_port[0]];
-	*(uint64_t *)&eth_hdr[1]->d_addr = dest_eth_addr[dst_port[1]];
-	*(uint64_t *)&eth_hdr[2]->d_addr = dest_eth_addr[dst_port[2]];
-	*(uint64_t *)&eth_hdr[3]->d_addr = dest_eth_addr[dst_port[3]];
-	*(uint64_t *)&eth_hdr[4]->d_addr = dest_eth_addr[dst_port[4]];
-	*(uint64_t *)&eth_hdr[5]->d_addr = dest_eth_addr[dst_port[5]];
-	*(uint64_t *)&eth_hdr[6]->d_addr = dest_eth_addr[dst_port[6]];
-	*(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
+	*(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];
+	*(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];
+	*(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];
+	*(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];
+	*(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];
+	*(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];
+	*(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];
+	*(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
 
 	send_single_packet(m[0], (uint8_t)dst_port[0]);
 	send_single_packet(m[1], (uint8_t)dst_port[1]);
@@ -1203,24 +1203,24 @@ simple_ipv6_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)
 		dst_port[7] = portid;
 
 	/* dst addr */
-	*(uint64_t *)&eth_hdr[0]->d_addr = dest_eth_addr[dst_port[0]];
-	*(uint64_t *)&eth_hdr[1]->d_addr = dest_eth_addr[dst_port[1]];
-	*(uint64_t *)&eth_hdr[2]->d_addr = dest_eth_addr[dst_port[2]];
-	*(uint64_t *)&eth_hdr[3]->d_addr = dest_eth_addr[dst_port[3]];
-	*(uint64_t *)&eth_hdr[4]->d_addr = dest_eth_addr[dst_port[4]];
-	*(uint64_t *)&eth_hdr[5]->d_addr = dest_eth_addr[dst_port[5]];
-	*(uint64_t *)&eth_hdr[6]->d_addr = dest_eth_addr[dst_port[6]];
-	*(uint64_t *)&eth_hdr[7]->d_addr = dest_eth_addr[dst_port[7]];
+	*(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];
+	*(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];
+	*(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];
+	*(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];
+	*(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];
+	*(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];
+	*(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];
+	*(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];
 
 	/* src addr */
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->s_addr);
-	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->s_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);
+	rte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);
 
 	send_single_packet(m[0], dst_port[0]);
 	send_single_packet(m[1], dst_port[1]);
@@ -1268,11 +1268,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
 		++(ipv4_hdr->hdr_checksum);
 #endif
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
@@ -1290,11 +1290,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)
 			dst_port = portid;
 
 		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+		*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];
 
 		/* src addr */
 		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+				&eth_hdr->src_addr);
 
 		send_single_packet(m, dst_port);
 	} else
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fb..61e4ee0ea1 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -426,10 +426,10 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 		created_pkt->data_len = pkt_size;
 		created_pkt->pkt_len = pkt_size;
 		eth_hdr = rte_pktmbuf_mtod(created_pkt, struct rte_ether_hdr *);
-		rte_ether_addr_copy(&eth_addr, &eth_hdr->s_addr);
+		rte_ether_addr_copy(&eth_addr, &eth_hdr->src_addr);
 
 		/* Set multicast address 01-1B-19-00-00-00. */
-		rte_ether_addr_copy(&eth_multicast, &eth_hdr->d_addr);
+		rte_ether_addr_copy(&eth_multicast, &eth_hdr->dst_addr);
 
 		eth_hdr->ether_type = htons(PTP_PROTOCOL);
 		ptp_msg = (struct ptp_message *)
@@ -449,14 +449,14 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 		client_clkid =
 			&ptp_msg->delay_req.hdr.source_port_id.clock_id;
 
-		client_clkid->id[0] = eth_hdr->s_addr.addr_bytes[0];
-		client_clkid->id[1] = eth_hdr->s_addr.addr_bytes[1];
-		client_clkid->id[2] = eth_hdr->s_addr.addr_bytes[2];
+		client_clkid->id[0] = eth_hdr->src_addr.addr_bytes[0];
+		client_clkid->id[1] = eth_hdr->src_addr.addr_bytes[1];
+		client_clkid->id[2] = eth_hdr->src_addr.addr_bytes[2];
 		client_clkid->id[3] = 0xFF;
 		client_clkid->id[4] = 0xFE;
-		client_clkid->id[5] = eth_hdr->s_addr.addr_bytes[3];
-		client_clkid->id[6] = eth_hdr->s_addr.addr_bytes[4];
-		client_clkid->id[7] = eth_hdr->s_addr.addr_bytes[5];
+		client_clkid->id[5] = eth_hdr->src_addr.addr_bytes[3];
+		client_clkid->id[6] = eth_hdr->src_addr.addr_bytes[4];
+		client_clkid->id[7] = eth_hdr->src_addr.addr_bytes[5];
 
 		rte_memcpy(&ptp_data->client_clock_id,
 			   client_clkid,
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e3..b24fd82a6e 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -757,7 +757,7 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
 	/* Learn MAC address of guest device from packet */
 	pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	if (find_vhost_dev(&pkt_hdr->s_addr)) {
+	if (find_vhost_dev(&pkt_hdr->src_addr)) {
 		RTE_LOG(ERR, VHOST_DATA,
 			"(%d) device is using a registered MAC!\n",
 			vdev->vid);
@@ -765,7 +765,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
 	}
 
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
-		vdev->mac_address.addr_bytes[i] = pkt_hdr->s_addr.addr_bytes[i];
+		vdev->mac_address.addr_bytes[i] =
+			pkt_hdr->src_addr.addr_bytes[i];
 
 	/* vlan_tag currently uses the device_id. */
 	vdev->vlan_tag = vlan_tags[vdev->vid];
@@ -945,7 +946,7 @@ virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
 	uint16_t lcore_id = rte_lcore_id();
 	pkt_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+	dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
 	if (!dst_vdev)
 		return -1;
 
@@ -993,7 +994,7 @@ find_local_dest(struct vhost_dev *vdev, struct rte_mbuf *m,
 	struct rte_ether_hdr *pkt_hdr =
 		rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
-	dst_vdev = find_vhost_dev(&pkt_hdr->d_addr);
+	dst_vdev = find_vhost_dev(&pkt_hdr->dst_addr);
 	if (!dst_vdev)
 		return 0;
 
@@ -1076,7 +1077,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 
 
 	nh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	if (unlikely(rte_is_broadcast_ether_addr(&nh->d_addr))) {
+	if (unlikely(rte_is_broadcast_ether_addr(&nh->dst_addr))) {
 		struct vhost_dev *vdev2;
 
 		TAILQ_FOREACH(vdev2, &vhost_dev_list, global_vdev_entry) {
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 755dcafa2f..ee7f4324e1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -461,11 +461,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
 }
 
 /* When we receive a HUP signal, print out our stats */
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 6d3c918d6d..14c20e6a8b 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -512,11 +512,11 @@ update_mac_address(struct rte_mbuf *m, unsigned dst_port)
 	eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 
 	/* 02:00:00:00:00:xx */
-	tmp = &eth->d_addr.addr_bytes[0];
+	tmp = &eth->dst_addr.addr_bytes[0];
 	*((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
 
 	/* src addr */
-	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->s_addr);
+	rte_ether_addr_copy(&vmdq_ports_eth_addr[dst_port], &eth->src_addr);
 }
 
 /* When we receive a HUP signal, print out our stats */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..a89945061a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -785,8 +785,8 @@ struct rte_flow_item_eth {
 /** Default mask for RTE_FLOW_ITEM_TYPE_ETH. */
 #ifndef __cplusplus
 static const struct rte_flow_item_eth rte_flow_item_eth_mask = {
-	.hdr.d_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
-	.hdr.s_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	.hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 	.hdr.ether_type = RTE_BE16(0x0000),
 };
 #endif
diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c
index feb5855144..aff22178e3 100644
--- a/lib/gro/gro_tcp4.c
+++ b/lib/gro/gro_tcp4.c
@@ -243,8 +243,8 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
 	ip_id = is_atomic ? 0 : rte_be_to_cpu_16(ipv4_hdr->packet_id);
 	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
 	key.ip_src_addr = ipv4_hdr->src_addr;
 	key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.src_port = tcp_hdr->src_port;
diff --git a/lib/gro/gro_udp4.c b/lib/gro/gro_udp4.c
index b8301296df..e78dda7874 100644
--- a/lib/gro/gro_udp4.c
+++ b/lib/gro/gro_udp4.c
@@ -238,8 +238,8 @@ gro_udp4_reassemble(struct rte_mbuf *pkt,
 	is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
 	frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
 	key.ip_src_addr = ipv4_hdr->src_addr;
 	key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.ip_id = ip_id;
diff --git a/lib/gro/gro_vxlan_tcp4.c b/lib/gro/gro_vxlan_tcp4.c
index f3b6e603b9..2005899afe 100644
--- a/lib/gro/gro_vxlan_tcp4.c
+++ b/lib/gro/gro_vxlan_tcp4.c
@@ -358,8 +358,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
 
 	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
 	key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
 	key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.inner_key.recv_ack = tcp_hdr->recv_ack;
@@ -368,8 +368,8 @@ gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
 
 	key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
 	key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
-	rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
-	rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
 	key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
 	key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
 	key.outer_src_port = udp_hdr->src_port;
diff --git a/lib/gro/gro_vxlan_udp4.c b/lib/gro/gro_vxlan_udp4.c
index 37476361d5..4767c910bb 100644
--- a/lib/gro/gro_vxlan_udp4.c
+++ b/lib/gro/gro_vxlan_udp4.c
@@ -338,16 +338,16 @@ gro_vxlan_udp4_reassemble(struct rte_mbuf *pkt,
 	is_last_frag = ((frag_offset & RTE_IPV4_HDR_MF_FLAG) == 0) ? 1 : 0;
 	frag_offset = (uint16_t)(frag_offset & RTE_IPV4_HDR_OFFSET_MASK) << 3;
 
-	rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
-	rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+	rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.inner_key.eth_saddr));
+	rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.inner_key.eth_daddr));
 	key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
 	key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
 	key.inner_key.ip_id = ip_id;
 
 	key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
 	key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
-	rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
-	rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->src_addr), &(key.outer_eth_saddr));
+	rte_ether_addr_copy(&(outer_eth_hdr->dst_addr), &(key.outer_eth_daddr));
 	key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
 	key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
 	/* Note: It is unnecessary to save outer_src_port here because it can
diff --git a/lib/net/rte_arp.c b/lib/net/rte_arp.c
index 5c1e27b8c0..9f7eb6b375 100644
--- a/lib/net/rte_arp.c
+++ b/lib/net/rte_arp.c
@@ -29,8 +29,8 @@ rte_net_make_rarp_packet(struct rte_mempool *mpool,
 	}
 
 	/* Ethernet header. */
-	memset(eth_hdr->d_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
-	rte_ether_addr_copy(mac, &eth_hdr->s_addr);
+	memset(eth_hdr->dst_addr.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
+	rte_ether_addr_copy(mac, &eth_hdr->src_addr);
 	eth_hdr->ether_type = RTE_BE16(RTE_ETHER_TYPE_RARP);
 
 	/* RARP header. */
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index 5f38b41dd4..b83e0d3fce 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -266,34 +266,16 @@ rte_ether_format_addr(char *buf, uint16_t size,
 int
 rte_ether_unformat_addr(const char *str, struct rte_ether_addr *eth_addr);
 
-/* Windows Sockets headers contain `#define s_addr S_un.S_addr`.
- * Temporarily disable this macro to avoid conflict at definition.
- * Place source MAC address in both `s_addr` and `S_un.S_addr` fields,
- * so that access works either directly or through the macro.
- */
-#pragma push_macro("s_addr")
-#ifdef s_addr
-#undef s_addr
-#endif
-
 /**
  * Ethernet header: Contains the destination address, source address
  * and frame type.
  */
 struct rte_ether_hdr {
-	struct rte_ether_addr d_addr; /**< Destination address. */
-	RTE_STD_C11
-	union {
-		struct rte_ether_addr s_addr; /**< Source address. */
-		struct {
-			struct rte_ether_addr S_addr;
-		} S_un; /**< Do not use directly; use s_addr instead.*/
-	};
+	struct rte_ether_addr dst_addr; /**< Destination address. */
+	struct rte_ether_addr src_addr; /**< Source address. */
 	rte_be16_t ether_type; /**< Frame type. */
 } __rte_aligned(2);
 
-#pragma pop_macro("s_addr")
-
 /**
  * Ethernet VLAN Header.
  * Contains the 16-bit VLAN Tag Control Identifier and the Ethernet type
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..4b0316bfed 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -615,8 +615,8 @@ encap_ether_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->ether.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->ether.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->ether.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(ethertype);
 
 	return 0;
@@ -633,8 +633,8 @@ encap_vlan_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->vlan.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->vlan.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 	/* VLAN */
@@ -657,8 +657,8 @@ encap_qinq_apply(void *data,
 		RTE_ETHER_TYPE_IPV6;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_QINQ);
 
 	/* SVLAN */
@@ -683,8 +683,8 @@ encap_qinq_pppoe_apply(void *data,
 	struct encap_qinq_pppoe_data *d = data;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->qinq.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->qinq.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 	/* SVLAN */
@@ -719,8 +719,8 @@ encap_mpls_apply(void *data,
 	uint32_t i;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->mpls.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->mpls.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(ethertype);
 
 	/* MPLS */
@@ -746,8 +746,8 @@ encap_pppoe_apply(void *data,
 	struct encap_pppoe_data *d = data;
 
 	/* Ethernet */
-	rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.d_addr);
-	rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.s_addr);
+	rte_ether_addr_copy(&p->pppoe.ether.da, &d->ether.dst_addr);
+	rte_ether_addr_copy(&p->pppoe.ether.sa, &d->ether.src_addr);
 	d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_PPPOE_SESSION);
 
 	/* PPPoE and PPP*/
@@ -777,9 +777,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 			/* VLAN */
@@ -818,9 +818,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV4);
 
 			/* IPv4*/
@@ -855,9 +855,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_VLAN);
 
 			/* VLAN */
@@ -896,9 +896,9 @@ encap_vxlan_apply(void *data,
 
 			/* Ethernet */
 			rte_ether_addr_copy(&p->vxlan.ether.da,
-					&d->ether.d_addr);
+					&d->ether.dst_addr);
 			rte_ether_addr_copy(&p->vxlan.ether.sa,
-					&d->ether.s_addr);
+					&d->ether.src_addr);
 			d->ether.ether_type = rte_htons(RTE_ETHER_TYPE_IPV6);
 
 			/* IPv6*/
-- 
2.29.3


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI
  2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2021-10-05 20:15  4%         ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
  2021-10-05 20:15  3%         ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
@ 2021-10-07 22:10  4%         ` Dmitry Kozlyuk
  2021-10-07 22:10  4%           ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
  2021-10-07 22:10  3%           ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
  2 siblings, 2 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk

Hide struct cmdline following the deprecation notice.
Hide struct rdline following the v1 discussion.

v5: fix API documentation (Olivier),
    remove useless NULL assignment (Stephen).
v4: rdline_create -> rdline_new, restore empty line (Olivier).
v3: add experimental tags and release notes for rdline.
v2: also hide struct rdline (David, Olivier).

Dmitry Kozlyuk (2):
  cmdline: make struct cmdline opaque
  cmdline: make struct rdline opaque

 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 22 ++++---
 doc/guides/rel_notes/deprecation.rst   |  4 --
 doc/guides/rel_notes/release_21_11.rst |  5 ++
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline.h                  | 19 ------
 lib/cmdline/cmdline_private.h          | 57 ++++++++++++++++-
 lib/cmdline/cmdline_rdline.c           | 43 ++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 86 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 10 files changed, 156 insertions(+), 93 deletions(-)

-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque
  2021-10-07 22:10  4%         ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
@ 2021-10-07 22:10  4%           ` Dmitry Kozlyuk
  2021-10-07 22:10  3%           ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
  To: dev; +Cc: Dmitry Kozlyuk, David Marchand, Olivier Matz, Ray Kinsella

Remove the definition of `struct cmdline` from public header.
Deprecation notice:
https://mails.dpdk.org/archives/dev/2020-September/183310.html

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 doc/guides/rel_notes/deprecation.rst   |  4 ----
 doc/guides/rel_notes/release_21_11.rst |  2 ++
 lib/cmdline/cmdline.h                  | 19 -------------------
 lib/cmdline/cmdline_private.h          |  8 +++++++-
 4 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..a404276fa2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -275,10 +275,6 @@ Deprecation Notices
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
-* cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
-  content. On Linux and FreeBSD, supported prior to DPDK 20.11,
-  original structure will be kept until DPDK 21.11.
-
 * security: The functions ``rte_security_set_pkt_metadata`` and
   ``rte_security_get_userdata`` will be made inline functions and additional
   flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b55900936d..18377e5813 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.h b/lib/cmdline/cmdline.h
index c29762ddae..96674dfda2 100644
--- a/lib/cmdline/cmdline.h
+++ b/lib/cmdline/cmdline.h
@@ -7,10 +7,6 @@
 #ifndef _CMDLINE_H_
 #define _CMDLINE_H_
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-#include <termios.h>
-#endif
-
 #include <rte_common.h>
 #include <rte_compat.h>
 
@@ -27,23 +23,8 @@
 extern "C" {
 #endif
 
-#ifndef RTE_EXEC_ENV_WINDOWS
-
-struct cmdline {
-	int s_in;
-	int s_out;
-	cmdline_parse_ctx_t *ctx;
-	struct rdline rdl;
-	char prompt[RDLINE_PROMPT_SIZE];
-	struct termios oldterm;
-};
-
-#else
-
 struct cmdline;
 
-#endif /* RTE_EXEC_ENV_WINDOWS */
-
 struct cmdline *cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out);
 void cmdline_set_prompt(struct cmdline *cl, const char *prompt);
 void cmdline_free(struct cmdline *cl);
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index a87c45275c..2e93674c66 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -11,6 +11,8 @@
 #include <rte_os_shim.h>
 #ifdef RTE_EXEC_ENV_WINDOWS
 #include <rte_windows.h>
+#else
+#include <termios.h>
 #endif
 
 #include <cmdline.h>
@@ -22,6 +24,7 @@ struct terminal {
 	int is_console_input;
 	int is_console_output;
 };
+#endif
 
 struct cmdline {
 	int s_in;
@@ -29,11 +32,14 @@ struct cmdline {
 	cmdline_parse_ctx_t *ctx;
 	struct rdline rdl;
 	char prompt[RDLINE_PROMPT_SIZE];
+#ifdef RTE_EXEC_ENV_WINDOWS
 	struct terminal oldterm;
 	char repeated_char;
 	WORD repeat_count;
-};
+#else
+	struct termios oldterm;
 #endif
+};
 
 /* Disable buffering and echoing, save previous settings to oldterm. */
 void terminal_adjust(struct cmdline *cl);
-- 
2.29.3


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque
  2021-10-07 22:10  4%         ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
  2021-10-07 22:10  4%           ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
@ 2021-10-07 22:10  3%           ` Dmitry Kozlyuk
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-07 22:10 UTC (permalink / raw)
  To: dev
  Cc: Dmitry Kozlyuk, Ali Alnubani, Gregory Etelson, David Marchand,
	Olivier Matz, Ray Kinsella

Hide struct rdline definition and some RDLINE_* constants in order
to be able to change internal buffer sizes transparently to the user.
Add new functions:

* rdline_new(): allocate and initialize struct rdline.
  This function replaces rdline_init() and takes an extra parameter:
  opaque user data for the callbacks.
* rdline_free(): deallocate struct rdline.
* rdline_get_history_buffer_size(): for use in tests.
* rdline_get_opaque(): to obtain user data in callback functions.

Remove rdline_init() function from library headers and export list,
because using it requires the knowledge of sizeof(struct rdline).

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-cmdline/commands.c            |  2 +-
 app/test/test_cmdline_lib.c            | 22 ++++---
 doc/guides/rel_notes/release_21_11.rst |  3 +
 lib/cmdline/cmdline.c                  |  3 +-
 lib/cmdline/cmdline_private.h          | 49 +++++++++++++++
 lib/cmdline/cmdline_rdline.c           | 43 ++++++++++++-
 lib/cmdline/cmdline_rdline.h           | 86 ++++++++++----------------
 lib/cmdline/version.map                |  8 ++-
 8 files changed, 147 insertions(+), 69 deletions(-)

diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d732976f08..a13e1d1afd 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -297,7 +297,7 @@ cmd_get_history_bufsize_parsed(__rte_unused void *parsed_result,
 	struct rdline *rdl = cmdline_get_rdline(cl);
 
 	cmdline_printf(cl, "History buffer size: %zu\n",
-			sizeof(rdl->history_buf));
+			rdline_get_history_buffer_size(rdl));
 }
 
 cmdline_parse_token_string_t cmd_get_history_bufsize_tok =
diff --git a/app/test/test_cmdline_lib.c b/app/test/test_cmdline_lib.c
index d5a09b4541..054ebf5e9d 100644
--- a/app/test/test_cmdline_lib.c
+++ b/app/test/test_cmdline_lib.c
@@ -83,18 +83,19 @@ test_cmdline_parse_fns(void)
 static int
 test_cmdline_rdline_fns(void)
 {
-	struct rdline rdl;
+	struct rdline *rdl;
 	rdline_write_char_t *wc = &cmdline_write_char;
 	rdline_validate_t *v = &valid_buffer;
 	rdline_complete_t *c = &complete_buffer;
 
-	if (rdline_init(NULL, wc, v, c) >= 0)
+	rdl = rdline_new(NULL, v, c, NULL);
+	if (rdl != NULL)
 		goto error;
-	if (rdline_init(&rdl, NULL, v, c) >= 0)
+	rdl = rdline_new(wc, NULL, c, NULL);
+	if (rdl != NULL)
 		goto error;
-	if (rdline_init(&rdl, wc, NULL, c) >= 0)
-		goto error;
-	if (rdline_init(&rdl, wc, v, NULL) >= 0)
+	rdl = rdline_new(wc, v, NULL, NULL);
+	if (rdl != NULL)
 		goto error;
 	if (rdline_char_in(NULL, 0) >= 0)
 		goto error;
@@ -102,25 +103,30 @@ test_cmdline_rdline_fns(void)
 		goto error;
 	if (rdline_add_history(NULL, "history") >= 0)
 		goto error;
-	if (rdline_add_history(&rdl, NULL) >= 0)
+	if (rdline_add_history(rdl, NULL) >= 0)
 		goto error;
 	if (rdline_get_history_item(NULL, 0) != NULL)
 		goto error;
 
 	/* void functions */
+	rdline_get_history_buffer_size(NULL);
+	rdline_get_opaque(NULL);
 	rdline_newline(NULL, "prompt");
-	rdline_newline(&rdl, NULL);
+	rdline_newline(rdl, NULL);
 	rdline_stop(NULL);
 	rdline_quit(NULL);
 	rdline_restart(NULL);
 	rdline_redisplay(NULL);
 	rdline_reset(NULL);
 	rdline_clear_history(NULL);
+	rdline_free(NULL);
 
+	rdline_free(rdl);
 	return 0;
 
 error:
 	printf("Error: function accepted null parameter!\n");
+	rdline_free(rdl);
 	return -1;
 }
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 18377e5813..af11f4a656 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -103,6 +103,9 @@ API Changes
 
 * cmdline: Made ``cmdline`` structure definition hidden on Linux and FreeBSD.
 
+* cmdline: Made ``rdline`` structure definition hidden. Functions are added
+  to dynamically allocate and free it, and to access user data in callbacks.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline.c b/lib/cmdline/cmdline.c
index a176d15130..8f1854cb0b 100644
--- a/lib/cmdline/cmdline.c
+++ b/lib/cmdline/cmdline.c
@@ -85,13 +85,12 @@ cmdline_new(cmdline_parse_ctx_t *ctx, const char *prompt, int s_in, int s_out)
 	cl->ctx = ctx;
 
 	ret = rdline_init(&cl->rdl, cmdline_write_char, cmdline_valid_buffer,
-			cmdline_complete_buffer);
+			cmdline_complete_buffer, cl);
 	if (ret != 0) {
 		free(cl);
 		return NULL;
 	}
 
-	cl->rdl.opaque = cl;
 	cmdline_set_prompt(cl, prompt);
 	rdline_newline(&cl->rdl, cl->prompt);
 
diff --git a/lib/cmdline/cmdline_private.h b/lib/cmdline/cmdline_private.h
index 2e93674c66..c2e906d8de 100644
--- a/lib/cmdline/cmdline_private.h
+++ b/lib/cmdline/cmdline_private.h
@@ -17,6 +17,49 @@
 
 #include <cmdline.h>
 
+#define RDLINE_BUF_SIZE 512
+#define RDLINE_PROMPT_SIZE  32
+#define RDLINE_VT100_BUF_SIZE  8
+#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
+#define RDLINE_HISTORY_MAX_LINE 64
+
+enum rdline_status {
+	RDLINE_INIT,
+	RDLINE_RUNNING,
+	RDLINE_EXITED
+};
+
+struct rdline {
+	enum rdline_status status;
+	/* rdline bufs */
+	struct cirbuf left;
+	struct cirbuf right;
+	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
+	char right_buf[RDLINE_BUF_SIZE];
+
+	char prompt[RDLINE_PROMPT_SIZE];
+	unsigned int prompt_size;
+
+	char kill_buf[RDLINE_BUF_SIZE];
+	unsigned int kill_size;
+
+	/* history */
+	struct cirbuf history;
+	char history_buf[RDLINE_HISTORY_BUF_SIZE];
+	int history_cur_line;
+
+	/* callbacks and func pointers */
+	rdline_write_char_t *write_char;
+	rdline_validate_t *validate;
+	rdline_complete_t *complete;
+
+	/* vt100 parser */
+	struct cmdline_vt100 vt100;
+
+	/* opaque pointer */
+	void *opaque;
+};
+
 #ifdef RTE_EXEC_ENV_WINDOWS
 struct terminal {
 	DWORD input_mode;
@@ -57,4 +100,10 @@ ssize_t cmdline_read_char(struct cmdline *cl, char *c);
 __rte_format_printf(2, 0)
 int cmdline_vdprintf(int fd, const char *format, va_list op);
 
+int rdline_init(struct rdline *rdl,
+		rdline_write_char_t *write_char,
+		rdline_validate_t *validate,
+		rdline_complete_t *complete,
+		void *opaque);
+
 #endif
diff --git a/lib/cmdline/cmdline_rdline.c b/lib/cmdline/cmdline_rdline.c
index 2cb53e38f2..d92b1cda53 100644
--- a/lib/cmdline/cmdline_rdline.c
+++ b/lib/cmdline/cmdline_rdline.c
@@ -13,6 +13,7 @@
 #include <ctype.h>
 
 #include "cmdline_cirbuf.h"
+#include "cmdline_private.h"
 #include "cmdline_rdline.h"
 
 static void rdline_puts(struct rdline *rdl, const char *buf);
@@ -37,9 +38,10 @@ isblank2(char c)
 
 int
 rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete)
+	    rdline_write_char_t *write_char,
+	    rdline_validate_t *validate,
+	    rdline_complete_t *complete,
+	    void *opaque)
 {
 	if (!rdl || !write_char || !validate || !complete)
 		return -EINVAL;
@@ -47,10 +49,33 @@ rdline_init(struct rdline *rdl,
 	rdl->validate = validate;
 	rdl->complete = complete;
 	rdl->write_char = write_char;
+	rdl->opaque = opaque;
 	rdl->status = RDLINE_INIT;
 	return cirbuf_init(&rdl->history, rdl->history_buf, 0, RDLINE_HISTORY_BUF_SIZE);
 }
 
+struct rdline *
+rdline_new(rdline_write_char_t *write_char,
+	   rdline_validate_t *validate,
+	   rdline_complete_t *complete,
+	   void *opaque)
+{
+	struct rdline *rdl;
+
+	rdl = malloc(sizeof(*rdl));
+	if (rdline_init(rdl, write_char, validate, complete, opaque) < 0) {
+		free(rdl);
+		rdl = NULL;
+	}
+	return rdl;
+}
+
+void
+rdline_free(struct rdline *rdl)
+{
+	free(rdl);
+}
+
 void
 rdline_newline(struct rdline *rdl, const char *prompt)
 {
@@ -564,6 +589,18 @@ rdline_get_history_item(struct rdline * rdl, unsigned int idx)
 	return NULL;
 }
 
+size_t
+rdline_get_history_buffer_size(struct rdline *rdl)
+{
+	return sizeof(rdl->history_buf);
+}
+
+void *
+rdline_get_opaque(struct rdline *rdl)
+{
+	return rdl != NULL ? rdl->opaque : NULL;
+}
+
 int
 rdline_add_history(struct rdline * rdl, const char * buf)
 {
diff --git a/lib/cmdline/cmdline_rdline.h b/lib/cmdline/cmdline_rdline.h
index d2170293de..1b4cc7ce57 100644
--- a/lib/cmdline/cmdline_rdline.h
+++ b/lib/cmdline/cmdline_rdline.h
@@ -10,9 +10,7 @@
 /**
  * This file is a small equivalent to the GNU readline library, but it
  * was originally designed for small systems, like Atmel AVR
- * microcontrollers (8 bits). Indeed, we don't use any malloc that is
- * sometimes not implemented (or just not recommended) on such
- * systems.
+ * microcontrollers (8 bits). It only uses malloc() on object creation.
  *
  * Obviously, it does not support as many things as the GNU readline,
  * but at least it supports some interesting features like a kill
@@ -31,6 +29,7 @@
  */
 
 #include <stdio.h>
+#include <rte_compat.h>
 #include <cmdline_cirbuf.h>
 #include <cmdline_vt100.h>
 
@@ -38,19 +37,6 @@
 extern "C" {
 #endif
 
-/* configuration */
-#define RDLINE_BUF_SIZE 512
-#define RDLINE_PROMPT_SIZE  32
-#define RDLINE_VT100_BUF_SIZE  8
-#define RDLINE_HISTORY_BUF_SIZE BUFSIZ
-#define RDLINE_HISTORY_MAX_LINE 64
-
-enum rdline_status {
-	RDLINE_INIT,
-	RDLINE_RUNNING,
-	RDLINE_EXITED
-};
-
 struct rdline;
 
 typedef int (rdline_write_char_t)(struct rdline *rdl, char);
@@ -60,52 +46,32 @@ typedef int (rdline_complete_t)(struct rdline *rdl, const char *buf,
 				char *dstbuf, unsigned int dstsize,
 				int *state);
 
-struct rdline {
-	enum rdline_status status;
-	/* rdline bufs */
-	struct cirbuf left;
-	struct cirbuf right;
-	char left_buf[RDLINE_BUF_SIZE+2]; /* reserve 2 chars for the \n\0 */
-	char right_buf[RDLINE_BUF_SIZE];
-
-	char prompt[RDLINE_PROMPT_SIZE];
-	unsigned int prompt_size;
-
-	char kill_buf[RDLINE_BUF_SIZE];
-	unsigned int kill_size;
-
-	/* history */
-	struct cirbuf history;
-	char history_buf[RDLINE_HISTORY_BUF_SIZE];
-	int history_cur_line;
-
-	/* callbacks and func pointers */
-	rdline_write_char_t *write_char;
-	rdline_validate_t *validate;
-	rdline_complete_t *complete;
-
-	/* vt100 parser */
-	struct cmdline_vt100 vt100;
-
-	/* opaque pointer */
-	void *opaque;
-};
-
 /**
- * Init fields for a struct rdline. Call this only once at the beginning
- * of your program.
- * \param rdl A pointer to an uninitialized struct rdline
+ * Allocate and initialize a new rdline instance.
+ *
  * \param write_char The function used by the function to write a character
  * \param validate A pointer to the function to execute when the
  *                 user validates the buffer.
  * \param complete A pointer to the function to execute when the
  *                 user completes the buffer.
+ * \param opaque User data for use in the callbacks.
+ *
+ * \return New rdline object on success, NULL on failure.
  */
-int rdline_init(struct rdline *rdl,
-		 rdline_write_char_t *write_char,
-		 rdline_validate_t *validate,
-		 rdline_complete_t *complete);
+__rte_experimental
+struct rdline *rdline_new(rdline_write_char_t *write_char,
+			  rdline_validate_t *validate,
+			  rdline_complete_t *complete,
+			  void *opaque);
 
+/**
+ * Free an rdline instance.
+ *
+ * \param rdl A pointer to an initialized struct rdline.
+ *            If NULL, this function is a no-op.
+ */
+__rte_experimental
+void rdline_free(struct rdline *rdl);
 
 /**
  * Init the current buffer, and display a prompt.
@@ -194,6 +160,18 @@ void rdline_clear_history(struct rdline *rdl);
  */
 char *rdline_get_history_item(struct rdline *rdl, unsigned int i);
 
+/**
+ * Get maximum history buffer size.
+ */
+__rte_experimental
+size_t rdline_get_history_buffer_size(struct rdline *rdl);
+
+/**
+ * Get the opaque pointer supplied on struct rdline creation.
+ */
+__rte_experimental
+void *rdline_get_opaque(struct rdline *rdl);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 980adb4f23..b9bbb87510 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -57,7 +57,6 @@ DPDK_22 {
 	rdline_clear_history;
 	rdline_get_buffer;
 	rdline_get_history_item;
-	rdline_init;
 	rdline_newline;
 	rdline_quit;
 	rdline_redisplay;
@@ -73,7 +72,14 @@ DPDK_22 {
 EXPERIMENTAL {
 	global:
 
+	# added in 20.11
 	cmdline_get_rdline;
 
+	# added in 21.11
+	rdline_new;
+	rdline_free;
+	rdline_get_history_buffer_size;
+	rdline_get_opaque;
+
 	local: *;
 };
-- 
2.29.3


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] [PATCH v9] bbdev: add device info related to data endianness assumption
  2021-10-07 18:58  0%         ` Chautru, Nicolas
@ 2021-10-08  4:34  0%           ` Nipun Gupta
  0 siblings, 0 replies; 200+ results
From: Nipun Gupta @ 2021-10-08  4:34 UTC (permalink / raw)
  To: Chautru, Nicolas, Akhil Goyal, dev, trix
  Cc: thomas, Zhang, Mingshan, Joshi, Arun, Hemant Agrawal, david.marchand



> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Friday, October 8, 2021 12:29 AM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org; trix@redhat.com
> Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data endianness
> assumption
> 
> Hi Nipun,
> 
> > -----Original Message-----
> > From: Nipun Gupta <nipun.gupta@nxp.com>
> > Sent: Thursday, October 7, 2021 9:49 AM
> > To: Chautru, Nicolas <nicolas.chautru@intel.com>; Akhil Goyal
> > <gakhil@marvell.com>; dev@dpdk.org; trix@redhat.com
> > Cc: thomas@monjalon.net; Zhang, Mingshan <mingshan.zhang@intel.com>;
> > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > endianness assumption
> >
> >
> >
> > > -----Original Message-----
> > > From: Chautru, Nicolas <nicolas.chautru@intel.com>
> > > Sent: Thursday, October 7, 2021 9:12 PM
> > > To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org; Nipun Gupta
> > > <nipun.gupta@nxp.com>; trix@redhat.com
> > > Cc: thomas@monjalon.net; Zhang, Mingshan
> > <mingshan.zhang@intel.com>;
> > > Joshi, Arun <arun.joshi@intel.com>; Hemant Agrawal
> > > <hemant.agrawal@nxp.com>; david.marchand@redhat.com
> > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > endianness assumption
> > >
> > > Hi Akhil,
> > >
> > >
> > > > -----Original Message-----
> > > > From: Akhil Goyal <gakhil@marvell.com>
> > > > Sent: Thursday, October 7, 2021 6:14 AM
> > > > To: Chautru, Nicolas <nicolas.chautru@intel.com>; dev@dpdk.org;
> > > > nipun.gupta@nxp.com; trix@redhat.com
> > > > Cc: thomas@monjalon.net; Zhang, Mingshan
> > <mingshan.zhang@intel.com>;
> > > > Joshi, Arun <arun.joshi@intel.com>; hemant.agrawal@nxp.com;
> > > > david.marchand@redhat.com
> > > > Subject: RE: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > endianness assumption
> > > >
> > > > > Subject: [EXT] [PATCH v9] bbdev: add device info related to data
> > > > > endianness assumption
> > > > >
> > > > Title is too long.
> > > > bbdev: add dev info for data endianness
> > >
> > > OK
> > >
> > > >
> > > > > Adding device information to capture explicitly the assumption of
> > > > > the input/output data byte endianness being processed.
> > > > >
> > > > > Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
> > > > > ---
> > > > >  doc/guides/rel_notes/release_21_11.rst             | 1 +
> > > > >  drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
> > > > >  drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
> > > > >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
> > > > >  drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
> > > > >  lib/bbdev/rte_bbdev.h                              | 8 ++++++++
> > > > >  6 files changed, 13 insertions(+)
> > > > >
> > > > > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > > > > b/doc/guides/rel_notes/release_21_11.rst
> > > > > index a8900a3..f0b3006 100644
> > > > > --- a/doc/guides/rel_notes/release_21_11.rst
> > > > > +++ b/doc/guides/rel_notes/release_21_11.rst
> > > > > @@ -191,6 +191,7 @@ API Changes
> > > > >
> > > > >  * bbdev: Added capability related to more comprehensive CRC
> > options.
> > > > >
> > > > > +* bbdev: Added device info related to data byte endianness
> > > > > +processing
> > > > > assumption.
> > > >
> > > > It is not clear from the description or the release notes, what the
> > > > application is supposed to do based on the new dev_info field set
> > > > and how the driver determine what value to set?
> > > > Isn't there a standard from the application stand point that the
> > > > input/output data Should be in BE or in LE like in case of IP packets which
> > are always in BE?
> > > > I mean why is it dependent on the PMD which is processing it?
> > > > Whatever application understands, PMD should comply with that and do
> > > > internal Swapping if it does not support it.
> > > > Am I missing something?
> > >
> > > This is really to allow Nipin to add his own NXP la12xx PMD, which
> > > appears to have different assumption on endianness.
> > > All existing processing is done in LE by default by the existing PMDs
> > > and the existing ecosystem.
> > > I cannot comment on why they would want to do that for the la12xx
> > > specifically, I could only speculate but here trying to help to find
> > > the best way for the new PMD to be supported.
> > > So here this suggested change is purely about exposing different
> > > assumption for the PMDs, so that this new PMD can still be supported
> > > under this API even though this is in effect incompatible with existing
> > ecosystem.
> > > In case the application has different assumption that what the PMD
> > > does, then byte swapping would have to be done in the application,
> > > more likely I assume that la12xx has its own ecosystem with different
> > > endianness required for other reasons.
> > > The option you are suggesting would be to put the burden on the PMD
> > > but I doubt there is an actual usecase for that. I assume they assume
> > > different endianness for other specific reason, not necessary to be
> > > compatible with existing ecosystem.
> > > Niping, Hemant, feel free to comment back, from previous discussion I
> > > believe this is what you wanted to do. Unsure of the reason, feel free
> > > to share more details or not.
> >
> > Akhil/Nicolas,
> >
> > As Hemant mentioned on v4 (previously asked by Dave)
> >
> > "---
> > If we go back to the data providing source i.e. FAPI interface, it is
> > implementation specific, as per SCF222.
> >
> > Our customers do use BE data in network and at FAPI interface.
> >
> > In LA12xx, at present, we use u8 Big-endian data for processing to FECA
> > engine.  We do see that other drivers in DPDK are using Little Endian *(with
> > u32 data)* but standards is open for both.
> > "---
> >
> > Standard is not specific to endianness and is open for implementation.
> > So it does not makes a reason to have one endianness as default and other
> > managed in the PMD, and the current change seems right.
> >
> > Yes endianness assumption is taken in the test vector input/output data, but
> > this should be acceptable as it does not impact the PMD's and end user
> > applications in general.
> 
> I want to clarify that this would impact the application in case the user wanted to
> switch between two such hw accelerators.
> I.e. you cannot switch between the two solutions; they are incompatible unless you
> explicitly do the byteswap in the application (as is done in bbdev-test).
> Not necessarily a problem in case they address two different ecosystems, but
> capturing the implication to be explicit: each device exposes the assumptions
> expected by the application, and it is up to the application using the bbdev API to
> satisfy the related assumptions.

Hi Nicolas,

Bbdev-test is just one test application, and it uses test vectors. Consider the bbdev
example application: the packets are in network order (i.e. big-endian) and no
swapping is done in this application. It is probably assumed that the packets
arriving at or leaving the network are in the endianness in which they
are to be processed by the device.

Similarly, in real-world usage/applications, the endianness handling would be done
at the other end (the originator/consumer of the data), which again could be done in
hw accelerators without impacting the real-world applications.

Also, as the standard is open for both, the current change makes sense, and swapping
in any of the PMDs does not seem suitable.

Regards,
Nipun

> 
> >
> > BTW Nicolos, my name is Nipun :)
> 
> My bad!
> 
> I am marking this patch as obsolete since you have included it in your serie.
> 
> 
> >
> > >
> > >
> > > >
> > > > >
> > > > >  ABI Changes
> > > > >  -----------
> > > > > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > index 4e2feef..eb2c6c1 100644
> > > > > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > > > > @@ -1089,6 +1089,7 @@
> > > > >  #else
> > > > >  	dev_info->harq_buffer_size = 0;
> > > > >  #endif
> > > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >  	acc100_check_ir(d);
> > > > >  }
> > > > >
> > > > > diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > index 6485cc8..c7f15c0 100644
> > > > > --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> > > > > @@ -372,6 +372,7 @@
> > > > >  	dev_info->default_queue_conf = default_queue_conf;
> > > > >  	dev_info->capabilities = bbdev_capabilities;
> > > > >  	dev_info->cpu_flag_reqs = NULL;
> > > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > >  	/* Calculates number of queues assigned to device */
> > > > >  	dev_info->max_num_queues = 0;
> > > > > diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > index 350c424..72e213e 100644
> > > > > --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
> > > > > @@ -644,6 +644,7 @@ struct __rte_cache_aligned fpga_queue {
> > > > >  	dev_info->default_queue_conf = default_queue_conf;
> > > > >  	dev_info->capabilities = bbdev_capabilities;
> > > > >  	dev_info->cpu_flag_reqs = NULL;
> > > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > >  	/* Calculates number of queues assigned to device */
> > > > >  	dev_info->max_num_queues = 0;
> > > > > diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > index e1db2bf..0cab91a 100644
> > > > > --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
> > > > > @@ -253,6 +253,7 @@ struct turbo_sw_queue {
> > > > >  	dev_info->capabilities = bbdev_capabilities;
> > > > >  	dev_info->min_alignment = 64;
> > > > >  	dev_info->harq_buffer_size = 0;
> > > > > +	dev_info->data_endianness = RTE_BBDEV_LITTLE_ENDIAN;
> > > > >
> > > > >  	rte_bbdev_log_debug("got device info from %u\n", dev->data-
> > > > > >dev_id);
> > > > >  }
> > > > > diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index
> > > > > 3ebf62e..b3f3000 100644
> > > > > --- a/lib/bbdev/rte_bbdev.h
> > > > > +++ b/lib/bbdev/rte_bbdev.h
> > > > > @@ -49,6 +49,12 @@ enum rte_bbdev_state {
> > > > >  	RTE_BBDEV_INITIALIZED
> > > > >  };
> > > > >
> > > > > +/** Definitions of device data byte endianness types */ enum
> > > > > +rte_bbdev_endianness {
> > > > > +	RTE_BBDEV_BIG_ENDIAN,    /**< Data with byte-endianness BE
> */
> > > > > +	RTE_BBDEV_LITTLE_ENDIAN, /**< Data with byte-endianness
> LE */ };
> > > > If at all be need this dev_info field, as Tom suggested we should
> > > > use RTE_BIG/LITTLE_ENDIAN.
> > >
> > > See separate comment on my reply to Tom:
> > > I considered this but the usage is different, these are build time
> > > #define, and really would bring confusion here.
> > > Note that there are not really the endianness of the system itself but
> > > specific to the bbdev data output going through signal processing.
> > > I thought it was more explicit and less confusing this way, feel free
> > > to comment back.
> > > NXP would know best why a different endianness would be required in the
> > PMD.
> >
> > Please see previous comment for endianness support.
> > I agree with the RTE_ prefix we can add it as it is for the application interface.
> >
> > >
> > > >
> > > > > +
> > > > >  /**
> > > > >   * Get the total number of devices that have been successfully
> > initialised.
> > > > >   *
> > > > > @@ -309,6 +315,8 @@ struct rte_bbdev_driver_info {
> > > > >  	uint16_t min_alignment;
> > > > >  	/** HARQ memory available in kB */
> > > > >  	uint32_t harq_buffer_size;
> > > > > +	/** Byte endianness assumption for input/output data */
> > > > > +	enum rte_bbdev_endianness data_endianness;
> > > >
> > > > We should define how the input and output data are expected from the
> > app.
> > > > If need be, we can define a simple ``bool swap`` instead of an enum.
> > >
> > > This could be done as well. Default no swap, and swap required for the
> > > new PMD.
> > > I will let Nipin/Hemant comment back.
> >
> > Again endianness is implementation specific and not standard for 5G
> > processing, unlike it is for network packet.
> >
> > Regards,
> > Nipun
> >
> > >
> > > >
> > > > >  	/** Default queue configuration used if none is supplied  */
> > > > >  	struct rte_bbdev_queue_conf default_queue_conf;
> > > > >  	/** Device operation capabilities */
> > > > > --
> > > > > 1.8.3.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
  @ 2021-10-08  6:15  4%             ` Liu, Changpeng
  2021-10-08  7:08  0%               ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Liu, Changpeng @ 2021-10-08  6:15 UTC (permalink / raw)
  To: Xia, Chenbo, Harris, James R, David Marchand
  Cc: dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar

I tried the above DPDK patches, and got the following errors:

pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error: Symbol is not public ABI
  115 |  rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci.c: In function ‘cfg_write_rte’:
pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error: Symbol is not public ABI
  125 |  rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pci.c: In function ‘register_rte_driver’:
pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error: Symbol is not public ABI
  375 |  rte_pci_register(&driver->driver); 

We may use the newly added API to replace rte_pci_write_config and rte_pci_read_config, but SPDK
does require rte_pci_register().
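
For reference, the "Symbol is not public ABI" errors come from compiling
against driver-internal symbols; a sketch of the build configuration being
discussed below (the option and macro names are real DPDK switches, but the
exact invocation is an assumption, not taken from this thread):

```shell
# Build DPDK with the driver-only headers installed (the
# 'enable_driver_sdk' meson option mentioned later in the thread).
meson setup build -Denable_driver_sdk=true
ninja -C build
meson install -C build

# An out-of-tree consumer of internal symbols such as rte_pci_register()
# would additionally compile with ALLOW_INTERNAL_API defined (assumed):
#   cc -DALLOW_INTERNAL_API ... $(pkg-config --cflags --libs libdpdk)
```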


> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> Sent: Wednesday, October 6, 2021 12:26 PM
> To: Harris, James R <james.r.harris@intel.com>; David Marchand
> <david.marchand@redhat.com>; Liu, Changpeng <changpeng.liu@intel.com>
> Cc: dev@dpdk.org; ci@dpdk.org; Aaron Conole <aconole@redhat.com>; dpdklab
> <dpdklab@iol.unh.edu>; Zawadzki, Tomasz <tomasz.zawadzki@intel.com>;
> alexeymar@mellanox.com
> Subject: RE: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> 
> Thanks David for helping check this and including SPDK folks!
> 
> Hi Changpeng,
> 
> Although we have synced about this during last release's deprecation notice,
> I’d like to summarize two points for SPDK to change if this patchset applied.
> 
> 1. The pci bus header for drivers will only be exposed if the meson option
> 'enable_driver_sdk' is set, so SPDK needs this DPDK meson option to build.
> 
> 2. As some functions in the pci bus are needed for apps and the rest for drivers,
> the header for driver is renamed to pci_driver.h (header for app is rte_bus_pci.h).
> So SPDK drivers will need pci_driver.h instead of rte_bus_pci.h starting from
> DPDK
> 21.11. David showed some tests he did below.
> 
> Could you help check above two updates are fine to SPDK?
> 
> Thanks,
> Chenbo
> 
> > -----Original Message-----
> > From: Harris, James R <james.r.harris@intel.com>
> > Sent: Monday, October 4, 2021 11:56 PM
> > To: David Marchand <david.marchand@redhat.com>; Xia, Chenbo
> > <chenbo.xia@intel.com>; Liu, Changpeng <changpeng.liu@intel.com>
> > Cc: dev@dpdk.org; ci@dpdk.org; Aaron Conole <aconole@redhat.com>;
> dpdklab
> > <dpdklab@iol.unh.edu>; Zawadzki, Tomasz <tomasz.zawadzki@intel.com>;
> > alexeymar@mellanox.com
> > Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> >
> > Adding Changpeng Liu from SPDK side.
> >
> > On 10/4/21, 6:48 AM, "David Marchand" <david.marchand@redhat.com>
> wrote:
> >
> >     On Thu, Sep 30, 2021 at 10:45 AM David Marchand
> >     <david.marchand@redhat.com> wrote:
> >     > On Wed, Sep 29, 2021 at 9:38 AM Xia, Chenbo <chenbo.xia@intel.com>
> > wrote:
> >     > > @David, could you help me understand what is the compile error in
> > Fedora 31?
> >     > > DPDK_compile_spdk failure is expected as the header name for SPDK
> > is changed,
> >     > > I am not sure if it's the same error...
> >     >
> >     > The error log is odd (no compilation "backtrace").
> >     > You'll need to test spdk manually I guess.
> >
> >     Tried your series with SPDK (w/o and w/ enable_driver_sdk).
> >     I think the same, and the error is likely due to the file rename.
> >
> >     $ make
> >       CC lib/env_dpdk/env.o
> >     In file included from env.c:39:0:
> >     env_internal.h:64:25: error: field ‘driver’ has incomplete type
> >       struct rte_pci_driver  driver;
> >                              ^
> >     env_internal.h:75:59: warning: ‘struct rte_pci_device’ declared inside
> >     parameter list [enabled by default]
> >      int pci_device_init(struct rte_pci_driver *driver, struct
> >     rte_pci_device *device);
> >                                                                ^
> >     env_internal.h:75:59: warning: its scope is only this definition or
> >     declaration, which is probably not what you want [enabled by default]
> >     env_internal.h:76:28: warning: ‘struct rte_pci_device’ declared inside
> >     parameter list [enabled by default]
> >      int pci_device_fini(struct rte_pci_device *device);
> >                                 ^
> >     env_internal.h:89:38: warning: ‘struct rte_pci_device’ declared inside
> >     parameter list [enabled by default]
> >      void vtophys_pci_device_added(struct rte_pci_device *pci_device);
> >                                           ^
> >     env_internal.h:96:40: warning: ‘struct rte_pci_device’ declared inside
> >     parameter list [enabled by default]
> >      void vtophys_pci_device_removed(struct rte_pci_device *pci_device);
> >                                             ^
> >     make[2]: *** [env.o] Error 1
> >     make[1]: *** [env_dpdk] Error 2
> >     make: *** [lib] Error 2
> >
> >
> >
> >     So basically, SPDK needs some updates since it has its own pci drivers.
> >     I copied some SPDK folks for info.
> >
> >     *Disclaimer* I only checked it links fine against my 21.11 dpdk env,
> >     and did not test the other cases:
> >
> >     diff --git a/dpdkbuild/Makefile b/dpdkbuild/Makefile
> >     index d51b1a6e5..0e666735d 100644
> >     --- a/dpdkbuild/Makefile
> >     +++ b/dpdkbuild/Makefile
> >     @@ -166,6 +166,7 @@ all: $(SPDK_ROOT_DIR)/dpdk/build-tmp
> >      $(SPDK_ROOT_DIR)/dpdk/build-tmp: $(SPDK_ROOT_DIR)/mk/cc.mk
> >     $(SPDK_ROOT_DIR)/include/spdk/config.h
> >             $(Q)rm -rf $(SPDK_ROOT_DIR)/dpdk/build
> > $(SPDK_ROOT_DIR)/dpdk/build-tmp
> >             $(Q)cd "$(SPDK_ROOT_DIR)/dpdk"; CC="$(SUB_CC)" meson
> >     --prefix="$(MESON_PREFIX)" --libdir lib -Dc_args="$(DPDK_CFLAGS)"
> >     -Dc_link_args="$(DPDK_LDFLAGS)" $(DPDK_OPTS)
> >     -Ddisable_drivers="$(shell echo $(DPDK_DISABLED_DRVERS) | sed -E "s/
> >     +/,/g")" build-tmp
> >     +       $(Q)! meson configure build-tmp | grep -qw enable_driver_sdk
> >     || meson configure build-tmp -Denable_driver_sdk=true
> >             $(Q)sed $(SED_INPLACE_FLAG) 's/#define RTE_EAL_PMD_PATH
> >     .*/#define RTE_EAL_PMD_PATH ""/g'
> >     $(SPDK_ROOT_DIR)/dpdk/build-tmp/rte_build_config.h
> >             $(Q) \
> >             # TODO Meson build adds libbsd dependency when it's available.
> >     This means any app will be \
> >     diff --git a/lib/env_dpdk/env.mk b/lib/env_dpdk/env.mk
    index cc7db8aab..e24c6942f 100644
> >     --- a/lib/env_dpdk/env.mk
> >     +++ b/lib/env_dpdk/env.mk
> >     @@ -172,6 +172,12 @@ DPDK_PRIVATE_LINKER_ARGS += -lnuma
> >      endif
> >      endif
> >
> >     +ifneq (,$(wildcard $(DPDK_INC_DIR)/rte_build_config.h))
> >     +ifneq (,$(shell grep -e "define RTE_HAS_LIBARCHIVE 1"
> >     $(DPDK_INC_DIR)/rte_build_config.h))
> >     +DPDK_PRIVATE_LINKER_ARGS += -larchive
> >     +endif
> >     +endif
> >     +
> >      ifeq ($(OS),Linux)
> >      DPDK_PRIVATE_LINKER_ARGS += -ldl
> >      endif
> >     diff --git a/lib/env_dpdk/env_internal.h b/lib/env_dpdk/env_internal.h
> >     index 2303f432c..24b377545 100644
> >     --- a/lib/env_dpdk/env_internal.h
> >     +++ b/lib/env_dpdk/env_internal.h
> >     @@ -43,13 +43,18 @@
> >      #include <rte_eal.h>
> >      #include <rte_bus.h>
> >      #include <rte_pci.h>
> >     -#include <rte_bus_pci.h>
> >      #include <rte_dev.h>
> >
> >      #if RTE_VERSION < RTE_VERSION_NUM(19, 11, 0, 0)
> >      #error RTE_VERSION is too old! Minimum 19.11 is required.
> >      #endif
> >
> >     +#if RTE_VERSION < RTE_VERSION_NUM(21, 11, 0, 0)
> >     +#include <rte_bus_pci.h>
> >     +#else
> >     +#include <pci_driver.h>
> >     +#endif
> >     +
> >      /* x86-64 and ARM userspace virtual addresses use only the low 48
> > bits [0..47],
> >       * which is enough to cover 256 TB.
> >       */
> >
> >
> >
> >     --
> >     David Marchand
> >


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
  @ 2021-10-08  6:41  4% ` zhihongx.peng
  2021-10-11  5:20  0%   ` Peng, ZhihongX
  2021-10-11  8:25  0%   ` Dmitry Kozlyuk
  0 siblings, 2 replies; 200+ results
From: zhihongx.peng @ 2021-10-08  6:41 UTC (permalink / raw)
  To: olivier.matz, dmitry.kozliuk; +Cc: dev, Zhihong Peng, stable

From: Zhihong Peng <zhihongx.peng@intel.com>

The cl pointer is allocated in the cmdline_stdin_new function, so
releasing it in the cmdline_stdin_exit function is the logical place;
this way callers no longer have to free cl separately.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst | 5 +++++
 lib/cmdline/cmdline_socket.c           | 1 +
 2 files changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index efeffe37a0..be24925d16 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -191,6 +191,11 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* cmdline: cmdline_stdin_exit() now also calls cmdline_free().
+  Since cl is allocated in cmdline_stdin_new(), releasing it in
+  cmdline_stdin_exit() is the logical place. Application code that
+  still calls cmdline_free() afterwards needs to be deleted.
+
 
 ABI Changes
 -----------
diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
index 998e8ade25..ebd5343754 100644
--- a/lib/cmdline/cmdline_socket.c
+++ b/lib/cmdline/cmdline_socket.c
@@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
 		return;
 
 	terminal_restore(cl);
+	cmdline_free(cl);
 }
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
  2021-10-08  6:15  4%             ` Liu, Changpeng
@ 2021-10-08  7:08  0%               ` David Marchand
  2021-10-08  7:44  0%                 ` Liu, Changpeng
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-10-08  7:08 UTC (permalink / raw)
  To: Liu, Changpeng
  Cc: Xia, Chenbo, Harris, James R, dev, ci, Aaron Conole, dpdklab,
	Zawadzki, Tomasz, alexeymar

Hello,

On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com> wrote:
>
> I tried the above DPDK patches, and got the following errors:
>
> pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error: Symbol is not public ABI
>   115 |  rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
>       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> pci.c: In function ‘cfg_write_rte’:
> pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error: Symbol is not public ABI
>   125 |  rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
>       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> pci.c: In function ‘register_rte_driver’:
> pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error: Symbol is not public ABI
>   375 |  rte_pci_register(&driver->driver);

I should have got this warning... but compilation passed fine for me.
Happy you tested it.

>
> We may use the newly added API to replace rte_pci_write_config and rte_pci_read_config, but SPDK
> does require rte_pci_register().

Since SPDK has a PCI driver, you'll need to compile the code that calls
those PCI driver internal APIs with ALLOW_INTERNAL_API defined.
You can probably add a #define ALLOW_INTERNAL_API first thing (it's
important to have it defined before including any dpdk header) in
pci.c

Another option is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
$(DPDK_INC) -DALLOW_EXPERIMENTAL_API.

Can someone from SPDK take over this and sync with Chenbo?


Thanks.

-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
  2021-10-08  7:08  0%               ` David Marchand
@ 2021-10-08  7:44  0%                 ` Liu, Changpeng
  2021-10-11  6:58  0%                   ` Xia, Chenbo
  0 siblings, 1 reply; 200+ results
From: Liu, Changpeng @ 2021-10-08  7:44 UTC (permalink / raw)
  To: David Marchand, Harris, James R
  Cc: Xia, Chenbo, dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar

Thanks, I have worked with Chenbo to address this issue before. After enabling the `ALLOW_INTERNAL_API` option, it now works with SPDK.

Another issue raised by Jim Harris is that, since this option isn't enabled by default in distro-packaged DPDK, SPDK will not be able
to build against a distro-packaged DPDK after this release.

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Friday, October 8, 2021 3:08 PM
> To: Liu, Changpeng <changpeng.liu@intel.com>
> Cc: Xia, Chenbo <chenbo.xia@intel.com>; Harris, James R
> <james.r.harris@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron Conole
> <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> 
> Hello,
> 
> On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com> wrote:
> >
> > I tried the above DPDK patches, and got the following errors:
> >
> > pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute error:
> Symbol is not public ABI
> >   115 |  rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
> >       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > pci.c: In function ‘cfg_write_rte’:
> > pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute error:
> Symbol is not public ABI
> >   125 |  rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
> >       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > pci.c: In function ‘register_rte_driver’:
> > pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute error:
> Symbol is not public ABI
> >   375 |  rte_pci_register(&driver->driver);
> 
> I should have got this warning... but compilation passed fine for me.
> Happy you tested it.
> 
> >
> > We may use the new added API to replace rte_pci_write_config and
> rte_pci_read_config, but SPDK
> > do require rte_pci_register().
> 
> Since SPDK has a PCI driver, you'll need to compile code that calls
> those PCI driver internal API with ALLOW_INTERNAL_API defined.
> You can probably add a #define ALLOW_INTERNAL_API first thing (it's
> important to have it defined before including any dpdk header) in
> pci.c
> 
> Another option, is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
> $(DPDK_INC) -DALLOW_EXPERIMENTAL_API.
> 
> Can someone from SPDK take over this and sync with Chenbo?
> 
> 
> Thanks.
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5] ethdev: fix representor port ID search by name
  2021-10-01 11:39  0%   ` Andrew Rybchenko
@ 2021-10-08  8:39  0%     ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-10-08  8:39 UTC (permalink / raw)
  To: johndale, qi.z.zhang, Slava Ovsiienko, somnath.kotur,
	ajit.khaparde, andrew.rybchenko, Matan Azrad, hyonkim,
	qiming.yang
  Cc: beilei.xing, NBU-Contact-Thomas Monjalon, dev, haiyue.wang,
	viacheslav.galaktionov, ferruh.yigit

On Fri, 2021-10-01 at 14:39 +0300, Andrew Rybchenko wrote:
> Hello PMD maintainers,
> 
> please, review the patch.
> 
> It is especially important for net/mlx5 since changes there are
> not trivial.
> 
> Thanks,
> Andrew.
> 
> On 9/13/21 2:26 PM, Andrew Rybchenko wrote:
> > From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> > 
> > Getting a list of representors from a representor does not make sense.
> > Instead, a parent device should be used.
> > 
> > To this end, extend the rte_eth_dev_data structure to include the port ID
> > of the backing device for representors.
> > 
> > Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> > Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > Acked-by: Beilei Xing <beilei.xing@intel.com>
> > ---
> > The new field is added into the hole in rte_eth_dev_data structure.
> > The patch does not change ABI, but extra care is required since ABI
> > check is disabled for the structure because of the libabigail bug [1].
> > It should not be a problem anyway since 21.11 is an ABI breaking release.
> > 
> > Potentially it is bad for out-of-tree drivers which implement
> > representors but do not fill in a new parert_port_id field in
> > rte_eth_dev_data structure. Get ID by name will not work.
> > 
> > mlx5 changes should be reviewed by maintainers very carefully, since
> > we are not sure if we patch it correctly.
> > 
> > [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> > 
> > v5:
> >     - try to improve name: backer_port_id instead of parent_port_id
> >     - init new field to RTE_MAX_ETHPORTS on allocation to avoid
> >       zero port usage by default
> > 
> > v4:
> >     - apply mlx5 review notes: remove fallback from generic ethdev
> >       code and add fallback to mlx5 code to handle legacy usecase
> > 
> > v3:
> >     - fix mlx5 build breakage
> > 
> > v2:
> >     - fix mlx5 review notes
> >     - try device port ID first before parent in order to address
> >       backward compatibility issue
> > 
> >  drivers/net/bnxt/bnxt_reps.c             |  1 +
> >  drivers/net/enic/enic_vf_representor.c   |  1 +
> >  drivers/net/i40e/i40e_vf_representor.c   |  1 +
> >  drivers/net/ice/ice_dcf_vf_representor.c |  1 +
> >  drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
> >  drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
> >  drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
> >  lib/ethdev/ethdev_driver.h               |  6 +++---
> >  lib/ethdev/rte_class_eth.c               |  2 +-
> >  lib/ethdev/rte_ethdev.c                  |  9 +++++----
> >  lib/ethdev/rte_ethdev_core.h             |  6 ++++++
> >  11 files changed, 46 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
> > index bdbad53b7d..0d50c0f1da 100644
> > --- a/drivers/net/bnxt/bnxt_reps.c
> > +++ b/drivers/net/bnxt/bnxt_reps.c
> > @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
> >  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> >  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> >  	eth_dev->data->representor_id = rep_params->vf_id;
> > +	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
> >  
> >  	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
> >  	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
> > diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
> > index 79dd6e5640..fedb09ecd6 100644
> > --- a/drivers/net/enic/enic_vf_representor.c
> > +++ b/drivers/net/enic/enic_vf_representor.c
> > @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
> >  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> >  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> >  	eth_dev->data->representor_id = vf->vf_id;
> > +	eth_dev->data->backer_port_id = pf->port_id;
> >  	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
> >  		sizeof(struct rte_ether_addr) *
> >  		ENIC_UNICAST_PERFECT_FILTERS, 0);
> > diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> > index 0481b55381..d65b821a01 100644
> > --- a/drivers/net/i40e/i40e_vf_representor.c
> > +++ b/drivers/net/i40e/i40e_vf_representor.c
> > @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> >  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
> >  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> >  	ethdev->data->representor_id = representor->vf_id;
> > +	ethdev->data->backer_port_id = pf->dev_data->port_id;
> >  
> >  	/* Setting the number queues allocated to the VF */
> >  	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
> > diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
> > index 970461f3e9..e51d0aa6b9 100644
> > --- a/drivers/net/ice/ice_dcf_vf_representor.c
> > +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> > @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
> >  
> >  	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> >  	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> > +	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
> >  
> >  	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
> >  
> > diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> > index d5b636a194..9fa75984fb 100644
> > --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> > +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> > @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> >  
> >  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> >  	ethdev->data->representor_id = representor->vf_id;
> > +	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
> >  
> >  	/* Set representor device ops */
> >  	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
> > diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> > index 470b16cb9a..1cddaaba1a 100644
> > --- a/drivers/net/mlx5/linux/mlx5_os.c
> > +++ b/drivers/net/mlx5/linux/mlx5_os.c
> > @@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >  	if (priv->representor) {
> >  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> >  		eth_dev->data->representor_id = priv->representor_id;
> > +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> > +			struct mlx5_priv *opriv =
> > +				rte_eth_devices[port_id].data->dev_private;
> > +			if (opriv &&
> > +			    opriv->master &&
> > +			    opriv->domain_id == priv->domain_id &&
> > +			    opriv->sh == priv->sh) {
> > +				eth_dev->data->backer_port_id = port_id;
> > +				break;
> > +			}
> > +		}
> > +		if (port_id >= RTE_MAX_ETHPORTS)
> > +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
> >  	}
> >  	priv->mp_id.port_id = eth_dev->data->port_id;
> >  	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
> > diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> > index 26fa927039..a9c244c7dc 100644
> > --- a/drivers/net/mlx5/windows/mlx5_os.c
> > +++ b/drivers/net/mlx5/windows/mlx5_os.c
> > @@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >  	if (priv->representor) {
> >  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> >  		eth_dev->data->representor_id = priv->representor_id;
> > +		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
> > +			struct mlx5_priv *opriv =
> > +				rte_eth_devices[port_id].data->dev_private;
> > +			if (opriv &&
> > +			    opriv->master &&
> > +			    opriv->domain_id == priv->domain_id &&
> > +			    opriv->sh == priv->sh) {
> > +				eth_dev->data->backer_port_id = port_id;
> > +				break;
> > +			}
> > +		}
> > +		if (port_id >= RTE_MAX_ETHPORTS)
> > +			eth_dev->data->backer_port_id = eth_dev->data->port_id;
> >  	}
> >  	/*
> >  	 * Store associated network device interface index. This index
> > diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> > index 40e474aa7e..b940e6cb38 100644
> > --- a/lib/ethdev/ethdev_driver.h
> > +++ b/lib/ethdev/ethdev_driver.h
> > @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> >   * For backward compatibility, if no representor info, direct
> >   * map legacy VF (no controller and pf).
> >   *
> > - * @param ethdev
> > - *  Handle of ethdev port.
> > + * @param port_id
> > + *  Port ID of the backing device.
> >   * @param type
> >   *  Representor type.
> >   * @param controller
> > @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> >   */
> >  __rte_internal
> >  int
> > -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > +rte_eth_representor_id_get(uint16_t port_id,
> >  			   enum rte_eth_representor_type type,
> >  			   int controller, int pf, int representor_port,
> >  			   uint16_t *repr_id);
> > diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> > index 1fe5fa1f36..eda216ced5 100644
> > --- a/lib/ethdev/rte_class_eth.c
> > +++ b/lib/ethdev/rte_class_eth.c
> > @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
> >  		c = i / (np * nf);
> >  		p = (i / nf) % np;
> >  		f = i % nf;
> > -		if (rte_eth_representor_id_get(edev,
> > +		if (rte_eth_representor_id_get(edev->data->backer_port_id,
> >  			eth_da.type,
> >  			eth_da.nb_mh_controllers == 0 ? -1 :
> >  					eth_da.mh_controllers[c],
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index daf5ca9242..7c9b0d6b3b 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
> >  	eth_dev = eth_dev_get(port_id);
> >  	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
> >  	eth_dev->data->port_id = port_id;
> > +	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
> >  	eth_dev->data->mtu = RTE_ETHER_MTU;
> >  	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
> >  
> > @@ -5996,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
> >  }
> >  
> >  int
> > -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> > +rte_eth_representor_id_get(uint16_t port_id,
> >  			   enum rte_eth_representor_type type,
> >  			   int controller, int pf, int representor_port,
> >  			   uint16_t *repr_id)
> > @@ -6012,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >  		return -EINVAL;
> >  
> >  	/* Get PMD representor range info. */
> > -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> > +	ret = rte_eth_representor_info_get(port_id, NULL);
> >  	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
> >  	    controller == -1 && pf == -1) {
> >  		/* Direct mapping for legacy VF representor. */
> > @@ -6027,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >  	if (info == NULL)
> >  		return -ENOMEM;
> >  	info->nb_ranges_alloc = n;
> > -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> > +	ret = rte_eth_representor_info_get(port_id, info);
> >  	if (ret < 0)
> >  		goto out;
> >  
> > @@ -6046,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >  			continue;
> >  		if (info->ranges[i].id_end < info->ranges[i].id_base) {
> >  			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> > -				ethdev->data->port_id, info->ranges[i].id_base,
> > +				port_id, info->ranges[i].id_base,
> >  				info->ranges[i].id_end, i);
> >  			continue;
> >  
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index edf96de2dc..48b814e8a1 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -185,6 +185,12 @@ struct rte_eth_dev_data {
> >  			/**< Switch-specific identifier.
> >  			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> >  			 */
> > +	uint16_t backer_port_id;
> > +			/**< Port ID of the backing device.
> > +			 *   This device will be used to query representor
> > +			 *   info and calculate representor IDs.
> > +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> > +			 */
> >  
> >  	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
> >  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> > 
> 

Reviewed-by: Xueming Li <xuemingl@nvidia.com>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6] ethdev: fix representor port ID search by name
    @ 2021-10-08  9:27  4% ` Andrew Rybchenko
  2021-10-11 12:30  4% ` [dpdk-dev] [PATCH v7] " Andrew Rybchenko
  2021-10-11 12:53  4% ` [dpdk-dev] [PATCH v8] " Andrew Rybchenko
  3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-08  9:27 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
	Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, Xueming Li

From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>

The patch is required for all PMDs which do not provide representor
info on the representor itself.

The function rte_eth_representor_id_get() is used in
eth_representor_cmp(), which the ethdev class iterator requires to
search for an ethdev port ID by name (representor case). Before the
patch the function was called on the representor itself and tried to
get representor info to match.

Search of port ID by name is used after hotplug to find out port ID
of the just plugged device.

Getting a list of representors from a representor does not make sense.
Instead, a backer device should be used.

To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI breaking release.

Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new backer_port_id field in the
rte_eth_dev_data structure: get ID by name will not work for them.

mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

v6:
    - provide more information in the changeset description

v5:
    - try to improve name: backer_port_id instead of parent_port_id
    - init new field to RTE_MAX_ETHPORTS on allocation to avoid
      zero port usage by default

v4:
    - apply mlx5 review notes: remove fallback from generic ethdev
      code and add fallback to mlx5 code to handle legacy usecase

v3:
    - fix mlx5 build breakage

v2:
    - fix mlx5 review notes
    - try device port ID first before parent in order to address
      backward compatibility issue

 drivers/net/bnxt/bnxt_reps.c             |  1 +
 drivers/net/enic/enic_vf_representor.c   |  1 +
 drivers/net/i40e/i40e_vf_representor.c   |  1 +
 drivers/net/ice/ice_dcf_vf_representor.c |  1 +
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
 drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
 lib/ethdev/ethdev_driver.h               |  6 +++---
 lib/ethdev/rte_class_eth.c               |  2 +-
 lib/ethdev/rte_ethdev.c                  |  9 +++++----
 lib/ethdev/rte_ethdev_core.h             |  6 ++++++
 11 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = rep_params->vf_id;
+	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
 
 	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
 	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->backer_port_id = pf->port_id;
 	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
 		sizeof(struct rte_ether_addr) *
 		ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = pf->dev_data->port_id;
 
 	/* Setting the number queues allocated to the VF */
 	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
+	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
 
 	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
 
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..612340b3b6 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	priv->mp_id.port_id = eth_dev->data->port_id;
 	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..a9c244c7dc 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	/*
 	 * Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
  * For backward compatibility, if no representor info, direct
  * map legacy VF (no controller and pf).
  *
- * @param ethdev
- *  Handle of ethdev port.
+ * @param port_id
+ *  Port ID of the backing device.
  * @param type
  *  Representor type.
  * @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
  */
 __rte_internal
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
 		c = i / (np * nf);
 		p = (i / nf) % np;
 		f = i % nf;
-		if (rte_eth_representor_id_get(edev,
+		if (rte_eth_representor_id_get(edev->data->backer_port_id,
 			eth_da.type,
 			eth_da.nb_mh_controllers == 0 ? -1 :
 					eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
 	eth_dev = eth_dev_get(port_id);
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
+	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
 	eth_dev->data->mtu = RTE_ETHER_MTU;
 	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
 
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
 }
 
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 		return -EINVAL;
 
 	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+	ret = rte_eth_representor_info_get(port_id, NULL);
 	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
 	    controller == -1 && pf == -1) {
 		/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 	if (info == NULL)
 		return -ENOMEM;
 	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+	ret = rte_eth_representor_info_get(port_id, info);
 	if (ret < 0)
 		goto out;
 
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 			continue;
 		if (info->ranges[i].id_end < info->ranges[i].id_base) {
 			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				ethdev->data->port_id, info->ranges[i].id_base,
+				port_id, info->ranges[i].id_base,
 				info->ranges[i].id_end, i);
 			continue;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
 			/**< Switch-specific identifier.
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
+	uint16_t backer_port_id;
+			/**< Port ID of the backing device.
+			 *   This device will be used to query representor
+			 *   info and calculate representor IDs.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
 
 	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-- 
2.30.2


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (3 preceding siblings ...)
  2021-10-07 11:27  9%         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
@ 2021-10-08 18:13  0%         ` Slava Ovsiienko
  2021-10-11  9:22  0%         ` Andrew Rybchenko
  5 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2021-10-08 18:13 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	Matan Azrad, sthemmin, NBU-Contact-longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, chenbo.xia, NBU-Contact-Thomas Monjalon,
	ferruh.yigit, mdr, jay.jayatheerthan

Hi,

I've reviewed the series, and it looks good to me.
I see we did not introduce any new indirect referencing on the datapath
(the now-hidden rte_eth_devices[] is simply replaced with the new rte_eth_fp_ops[]).
My only concern is that we'll get two places where pointers to the PMD routines
are stored, which means they could potentially get out of sync, but I do not see
an actual scenario for that.
For example, the mlx5 PMD provides multiple tx/rx_burst routines; the actual routine
selection happens in dev_start() and is then not supposed to change until dev_stop().
Internal PMD checks like:
"if (dev->rx_pkt_burst == mlx5_rx_burst)"
are supposed to continue working OK as well.

Hence, for series:
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

With best regards,
Slava

> -----Original Message-----
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Sent: Thursday, October 7, 2021 14:28
> To: dev@dpdk.org
> Cc: xiaoyun.li@intel.com; anoobj@marvell.com; jerinj@marvell.com;
> ndabilpuram@marvell.com; adwivedi@marvell.com;
> shepard.siegel@atomicrules.com; ed.czeck@atomicrules.com;
> john.miller@atomicrules.com; irusskikh@marvell.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> rahul.lakkireddy@chelsio.com; hemant.agrawal@nxp.com;
> sachin.saxena@oss.nxp.com; haiyue.wang@intel.com; johndale@cisco.com;
> hyonkim@cisco.com; qi.z.zhang@intel.com; xiao.w.wang@intel.com;
> humin29@huawei.com; yisen.zhuang@huawei.com; oulijun@huawei.com;
> beilei.xing@intel.com; jingjing.wu@intel.com; qiming.yang@intel.com; Matan
> Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> sthemmin@microsoft.com; NBU-Contact-longli <longli@microsoft.com>;
> heinrich.kuhn@corigine.com; kirankumark@marvell.com;
> andrew.rybchenko@oktetlabs.ru; mczekaj@marvell.com;
> jiawenwu@trustnetic.com; jianwang@trustnetic.com;
> maxime.coquelin@redhat.com; chenbo.xia@intel.com; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com; mdr@ashroe.eu;
> jay.jayatheerthan@intel.com; Konstantin Ananyev
> <konstantin.ananyev@intel.com>
> Subject: [PATCH v5 0/7] hide eth dev related structures
> 
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
>    related data based on their functionality:
>    first 64B line for Rx, second one for Tx.
>    Didn't observe any real performance difference comparing to
>    original layout. Though decided to keep a new one, as it seems
>    a bit more plausible.
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/verion.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>    section makes checkpatch.sh to complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-
> 1-andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
> and not visible to the user.
> That should allow future possible changes to core ethdev related structures to
> be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from the function pointers themselves, each element of
>    this flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with new implementation RX/TX callback invocation
>    will cost one extra function call with this changes. That might cause
>    some slowdown for code-path with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy and
> provide your feedback.
> 
> Andrew Rybchenko (1):
>   ethdev: remove legacy Rx descriptor done API
> 
> Konstantin Ananyev (6):
>   ethdev: allocate max space for internal queue array
>   ethdev: change input parameters for rx_queue_count
>   ethdev: copy fast-path API into separate structure
>   ethdev: make fast-path functions to use new flat array
>   ethdev: add API to retrieve multiple ethernet addresses
>   ethdev: hide eth dev related structures
> 
>  app/test-pmd/config.c                         |  23 +-
>  doc/guides/nics/features.rst                  |   6 +-
>  doc/guides/rel_notes/deprecation.rst          |   5 -
>  doc/guides/rel_notes/release_21_11.rst        |  21 ++
>  drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
>  drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
>  drivers/net/ark/ark_ethdev_rx.c               |   4 +-
>  drivers/net/ark/ark_ethdev_rx.h               |   3 +-
>  drivers/net/atlantic/atl_ethdev.h             |   2 +-
>  drivers/net/atlantic/atl_rxtx.c               |   9 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
>  drivers/net/cxgbe/base/adapter.h              |   2 +-
>  drivers/net/dpaa/dpaa_ethdev.c                |   9 +-
>  drivers/net/dpaa2/dpaa2_ethdev.c              |   9 +-
>  drivers/net/dpaa2/dpaa2_ptp.c                 |   2 +-
>  drivers/net/e1000/e1000_ethdev.h              |  10 +-
>  drivers/net/e1000/em_ethdev.c                 |   1 -
>  drivers/net/e1000/em_rxtx.c                   |  21 +-
>  drivers/net/e1000/igb_ethdev.c                |   2 -
>  drivers/net/e1000/igb_rxtx.c                  |  21 +-
>  drivers/net/enic/enic_ethdev.c                |  12 +-
>  drivers/net/fm10k/fm10k.h                     |   5 +-
>  drivers/net/fm10k/fm10k_ethdev.c              |   1 -
>  drivers/net/fm10k/fm10k_rxtx.c                |  29 +-
>  drivers/net/hns3/hns3_rxtx.c                  |   7 +-
>  drivers/net/hns3/hns3_rxtx.h                  |   2 +-
>  drivers/net/i40e/i40e_ethdev.c                |   1 -
>  drivers/net/i40e/i40e_rxtx.c                  |  30 +-
>  drivers/net/i40e/i40e_rxtx.h                  |   4 +-
>  drivers/net/iavf/iavf_rxtx.c                  |   4 +-
>  drivers/net/iavf/iavf_rxtx.h                  |   2 +-
>  drivers/net/ice/ice_rxtx.c                    |   4 +-
>  drivers/net/ice/ice_rxtx.h                    |   2 +-
>  drivers/net/igc/igc_ethdev.c                  |   1 -
>  drivers/net/igc/igc_txrx.c                    |  23 +-
>  drivers/net/igc/igc_txrx.h                    |   5 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              |   2 -
>  drivers/net/ixgbe/ixgbe_ethdev.h              |   5 +-
>  drivers/net/ixgbe/ixgbe_rxtx.c                |  22 +-
>  drivers/net/mlx5/mlx5_rx.c                    |  26 +-
>  drivers/net/mlx5/mlx5_rx.h                    |   2 +-
>  drivers/net/netvsc/hn_rxtx.c                  |   4 +-
>  drivers/net/netvsc/hn_var.h                   |   3 +-
>  drivers/net/nfp/nfp_rxtx.c                    |   4 +-
>  drivers/net/nfp/nfp_rxtx.h                    |   3 +-
>  drivers/net/octeontx2/otx2_ethdev.c           |   1 -
>  drivers/net/octeontx2/otx2_ethdev.h           |   3 +-
>  drivers/net/octeontx2/otx2_ethdev_ops.c       |  20 +-
>  drivers/net/sfc/sfc_ethdev.c                  |  29 +-
>  drivers/net/thunderx/nicvf_ethdev.c           |   3 +-
>  drivers/net/thunderx/nicvf_rxtx.c             |   4 +-
>  drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
>  drivers/net/txgbe/txgbe_ethdev.h              |   3 +-
>  drivers/net/txgbe/txgbe_rxtx.c                |   4 +-
>  drivers/net/vhost/rte_eth_vhost.c             |   4 +-
>  drivers/net/virtio/virtio_ethdev.c            |   1 -
>  lib/ethdev/ethdev_driver.h                    | 148 +++++++++
>  lib/ethdev/ethdev_private.c                   |  83 +++++
>  lib/ethdev/ethdev_private.h                   |   7 +
>  lib/ethdev/rte_ethdev.c                       |  89 ++++--
>  lib/ethdev/rte_ethdev.h                       | 288 ++++++++++++------
>  lib/ethdev/rte_ethdev_core.h                  | 171 +++--------
>  lib/ethdev/version.map                        |   8 +-
>  lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
>  lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
>  lib/eventdev/rte_eventdev.c                   |   2 +-
>  lib/metrics/rte_metrics_telemetry.c           |   2 +-
>  67 files changed, 677 insertions(+), 564 deletions(-)
> 
> --
> 2.26.3


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
  @ 2021-10-08 20:45  3% ` Akhil Goyal
  2021-10-08 20:45  3%   ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
  2021-10-11 10:46  0%   ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Zhang, Roy Fan
  0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-10-08 20:45 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

Remove the *_LIST_END enumerators from the asymmetric crypto
library to avoid an ABI breakage on every new addition to
these enums.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
v2: no change

 app/test/test_cryptodev_asym.c  | 4 ++--
 drivers/crypto/qat/qat_asym.c   | 2 +-
 lib/cryptodev/rte_crypto_asym.h | 4 ----
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..603b2e4609 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -541,7 +541,7 @@ test_one_case(const void *test_case, int sessionless)
 		printf("  %u) TestCase %s %s\n", test_index++,
 			tc.modex.description, test_msg);
 	} else {
-		for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+		for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
 			if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
 				if (tc.rsa_data.op_type_flags & (1 << i)) {
 					if (tc.rsa_data.key_exp) {
@@ -1027,7 +1027,7 @@ static inline void print_asym_capa(
 			rte_crypto_asym_xform_strings[capa->xform_type]);
 	printf("operation supported -");
 
-	for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+	for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
 		/* check supported operations */
 		if (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
 			printf(" %s",
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 85973812a8..026625a4d2 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev,
 			err = -EINVAL;
 			goto error;
 		}
-	} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
+	} else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
 			|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
 		QAT_LOG(ERR, "Invalid asymmetric crypto xform");
 		err = -EINVAL;
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9c866f553f..5edf658572 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
 	 */
 	RTE_CRYPTO_ASYM_XFORM_ECPM,
 	/**< Elliptic Curve Point Multiplication */
-	RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
-	/**< End of list */
 };
 
 /**
@@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
 	/**< DH Public Key generation operation */
 	RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
 	/**< DH Shared Secret compute operation */
-	RTE_CRYPTO_ASYM_OP_LIST_END
 };
 
 /**
@@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
 	/**< RSA PKCS#1 OAEP padding scheme */
 	RTE_CRYPTO_RSA_PADDING_PSS,
 	/**< RSA PKCS#1 PSS padding scheme */
-	RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
 };
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields
  2021-10-08 20:45  3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
@ 2021-10-08 20:45  3%   ` Akhil Goyal
  2021-10-11  8:31  0%     ` Thomas Monjalon
  2021-10-11 10:46  0%   ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Zhang, Roy Fan
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-08 20:45 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

In struct rte_security_ipsec_sa_options, every new option added
causes an ABI breakage. To avoid this, a reserved_opts bitfield
is added for the remaining bits available in the structure.
Now, for every new SA option, reserved_opts can be reduced
and the new option added in its place.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v2: rebase and removed libabigail.abignore change.
    Exception may be added when there is a need for change.

 lib/security/rte_security.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..c0ea13892e 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -258,6 +258,12 @@ struct rte_security_ipsec_sa_options {
 	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
 	 */
 	uint32_t l4_csum_enable : 1;
+
+	/** Reserved bit fields for future extension
+	 *
+	 * Note: reduce number of bits in reserved_opts for every new option
+	 */
+	uint32_t reserved_opts : 18;
 };
 
 /** IPSec security association direction */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v15 0/9] eal: Add EAL API for threading
  @ 2021-10-08 22:40  3% ` Narcisa Ana Maria Vasile
  2021-10-09  7:41  3%   ` [dpdk-dev] [PATCH v16 " Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-10-08 22:40 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model, so DPDK
currently relies on a header file that hides the Windows calls behind
pthread-compatible interfaces. Given that EAL should isolate the
environment specifics from the applications and libraries and mediate
all communication with the operating system, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive an
rte_thread_attr_t object that causes the thread to be created with the
affinity and priority described by the attributes object. If no
rte_thread_attr_t is passed (the parameter is NULL), the default affinity
and priority are used. An rte_thread_attr_t object can also be reset to
the default values by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_get_priority      - retrieves the priority of a thread
                               from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

The user can choose the thread priority through an EAL parameter when
starting an application. If the EAL parameter is not used, the
per-platform default value for thread priority is used. Otherwise, the
administrator can set one of the available options:
 --thread-prio normal
 --thread-prio realtime

Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff

*Affinity* is described by the already known "rte_cpuset_t" type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
  different thread, the function returns immediately. Otherwise, 
  the mutex will be acquired.
- Add function for getting the priority of a thread.
  An auxiliary function that translates the OS priority to the
  EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
  Verify mutex locking, verify barrier return values. Add test for
  statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_setaffinity_np() and
  using pthread_setaffinity_np() after the thread is created.

v14:
- Remove patch "eal: add EAL argument for setting thread priority"
  This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
  as the default.
- Fix issue with thread return value.

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c

Narcisa Vasile (9):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  Add unit tests for thread API

 app/test/meson.build            |   2 +
 app/test/test_threads.c         | 359 +++++++++++++++++
 lib/eal/common/meson.build      |   1 +
 lib/eal/common/rte_thread.c     | 496 ++++++++++++++++++++++++
 lib/eal/include/rte_thread.h    | 435 ++++++++++++++++++++-
 lib/eal/unix/meson.build        |   1 -
 lib/eal/unix/rte_thread.c       |  92 -----
 lib/eal/version.map             |  22 ++
 lib/eal/windows/eal_lcore.c     | 176 ++++++---
 lib/eal/windows/eal_windows.h   |  10 +
 lib/eal/windows/include/sched.h |   2 +-
 lib/eal/windows/rte_thread.c    | 656 ++++++++++++++++++++++++++++++--
 12 files changed, 2079 insertions(+), 173 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v16 0/9] eal: Add EAL API for threading
  2021-10-08 22:40  3% ` [dpdk-dev] [PATCH v15 " Narcisa Ana Maria Vasile
@ 2021-10-09  7:41  3%   ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-10-09  7:41 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_get_priority      - retrieves the priority of a thread
                               from the OS
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

The user can choose the thread priority through an EAL parameter
when starting an application. If the EAL parameter is not used,
the per-platform default value for thread priority is used.
Otherwise, the administrator can set one of the available options:
 --thread-prio normal
 --thread-prio realtime

Example:
./dpdk-l2fwd -l 0-3 -n 4 –thread-prio normal -- -q 8 -p ffff

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v16:
- Fix warning on freebsd by adding cast
- Change affinity unit test to consider cases when the requested CPUs
  are not available on the system.
- Fix priority unit test to avoid termination of thread before the
  priority is checked.

v15:
- Add try_lock mutex functionality. If the mutex is already owned by a
  different thread, the function returns immediately. Otherwise, 
  the mutex will be acquired.
- Add function for getting the priority of a thread.
  An auxiliary function that translates the OS priority to the
  EAL accepted ones is added.
- Fix unit tests logging, add descriptive asserts that mark test failures.
  Verify mutex locking, verify barrier return values. Add test for
  statically initialized mutexes.
- Fix Alpine build by removing the use of pthread_attr_set_affinity() and
  using pthread_set_affinity() after the thread is created.

v14:
- Remove patch "eal: add EAL argument for setting thread priority"
  This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
  as the default.
- Fix issue with thread return value.

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down into subpatches
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c

Narcisa Vasile (9):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  Add unit tests for thread API

 app/test/meson.build            |   2 +
 app/test/test_threads.c         | 372 ++++++++++++++++++
 lib/eal/common/meson.build      |   1 +
 lib/eal/common/rte_thread.c     | 497 ++++++++++++++++++++++++
 lib/eal/include/rte_thread.h    | 435 ++++++++++++++++++++-
 lib/eal/unix/meson.build        |   1 -
 lib/eal/unix/rte_thread.c       |  92 -----
 lib/eal/version.map             |  22 ++
 lib/eal/windows/eal_lcore.c     | 176 ++++++---
 lib/eal/windows/eal_windows.h   |  10 +
 lib/eal/windows/include/sched.h |   2 +-
 lib/eal/windows/rte_thread.c    | 656 ++++++++++++++++++++++++++++++--
 12 files changed, 2093 insertions(+), 173 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v24 0/6] support dmadev
    2021-09-24 10:53  3% ` [dpdk-dev] [PATCH v23 0/6] support dmadev Chengwen Feng
@ 2021-10-09  9:33  3% ` Chengwen Feng
  1 sibling, 0 replies; 200+ results
From: Chengwen Feng @ 2021-10-09  9:33 UTC (permalink / raw)
  To: thomas, ferruh.yigit, bruce.richardson, jerinj, jerinjacobk,
	andrew.rybchenko
  Cc: dev, mb, nipun.gupta, hemant.agrawal, maxime.coquelin,
	honnappa.nagarahalli, david.marchand, sburla, pkapoor,
	konstantin.ananyev, conor.walsh, kevin.laatz

This patch set contains six patches for the new dmadev library.

Chengwen Feng (6):
  dmadev: introduce DMA device library
  dmadev: add control plane API support
  dmadev: add data plane API support
  dmadev: add multi-process support
  dma/skeleton: introduce skeleton dmadev driver
  app/test: add dmadev API test

---
v24:
* use rte_dma_fp_object to hide implementation details.
* support doxygen groups for RTE_DMA_CAPA_* and RTE_DMA_OP_*.
* adjusted the naming of some functions.
* fix typo.
v23:
* split multi-process support from 1st patch.
* fix some static check warning.
* fix skeleton cpu thread zero_req_count flip bug.
* add test_dmadev_api.h.
* describe how the dmadev state is modified when init succeeds.
v22:
* function prefix change from rte_dmadev_* to rte_dma_*.
* change to prefix comment in most scenarios.
* dmadev dev_id use int16_t type.
* fix typo.
* organize patchsets in incremental mode.
v21:
* add comment for reserved fields of struct rte_dmadev.
v20:
* delete unnecessary and duplicate include header files.
* the conf_sz parameter is added to the configure and vchan-setup
  callbacks of the PMD, this is mainly used to enhance ABI
  compatibility.
* the rte_dmadev structure field is rearranged to reserve more space
  for I/O functions.
* fix some ambiguous and unnecessary comments.
* fix the potential memory leak of ut.
* redefine skeldma_init_once to skeldma_count.
* suppress rte_dmadev error output when execute ut.

 MAINTAINERS                            |    7 +
 app/test/meson.build                   |    4 +
 app/test/test_dmadev.c                 |   41 +
 app/test/test_dmadev_api.c             |  574 +++++++++++++
 app/test/test_dmadev_api.h             |    5 +
 doc/api/doxy-api-index.md              |    1 +
 doc/api/doxy-api.conf.in               |    1 +
 doc/guides/dmadevs/index.rst           |   12 +
 doc/guides/index.rst                   |    1 +
 doc/guides/prog_guide/dmadev.rst       |  120 +++
 doc/guides/prog_guide/img/dmadev.svg   |  283 +++++++
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/rel_notes/release_21_11.rst |    6 +
 drivers/dma/meson.build                |    6 +
 drivers/dma/skeleton/meson.build       |    7 +
 drivers/dma/skeleton/skeleton_dmadev.c |  571 +++++++++++++
 drivers/dma/skeleton/skeleton_dmadev.h |   61 ++
 drivers/dma/skeleton/version.map       |    3 +
 drivers/meson.build                    |    1 +
 lib/dmadev/meson.build                 |    7 +
 lib/dmadev/rte_dmadev.c                |  844 +++++++++++++++++++
 lib/dmadev/rte_dmadev.h                | 1048 ++++++++++++++++++++++++
 lib/dmadev/rte_dmadev_core.h           |   78 ++
 lib/dmadev/rte_dmadev_pmd.h            |  173 ++++
 lib/dmadev/version.map                 |   35 +
 lib/meson.build                        |    1 +
 26 files changed, 3891 insertions(+)
 create mode 100644 app/test/test_dmadev.c
 create mode 100644 app/test/test_dmadev_api.c
 create mode 100644 app/test/test_dmadev_api.h
 create mode 100644 doc/guides/dmadevs/index.rst
 create mode 100644 doc/guides/prog_guide/dmadev.rst
 create mode 100644 doc/guides/prog_guide/img/dmadev.svg
 create mode 100644 drivers/dma/meson.build
 create mode 100644 drivers/dma/skeleton/meson.build
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.c
 create mode 100644 drivers/dma/skeleton/skeleton_dmadev.h
 create mode 100644 drivers/dma/skeleton/version.map
 create mode 100644 lib/dmadev/meson.build
 create mode 100644 lib/dmadev/rte_dmadev.c
 create mode 100644 lib/dmadev/rte_dmadev.h
 create mode 100644 lib/dmadev/rte_dmadev_core.h
 create mode 100644 lib/dmadev/rte_dmadev_pmd.h
 create mode 100644 lib/dmadev/version.map

-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
@ 2021-10-09 12:05  0%           ` fengchengwen
  2021-10-11  1:18  0%             ` fengchengwen
                               ` (2 more replies)
  2021-10-11  8:25  0%           ` Andrew Rybchenko
  1 sibling, 3 replies; 200+ results
From: fengchengwen @ 2021-10-09 12:05 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan

On 2021/10/7 19:27, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
>    inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>    (for secondary process) we call eth_dev_fp_ops_setup, which
>    copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>    into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>    flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)

The port_id parameter is preferable; it hides rte_eth_fp_ops as much as possible.

> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_fp_ops dummy_ops = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev)

Because fp_ops and eth_dev have a one-to-one correspondence, it's better
to use only the port_id parameter.

> +{
> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	fpo->rx_queue_count = dev->rx_queue_count;
> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	fpo->rxq.data = dev->data->rx_queues;
> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	fpo->txq.data = dev->data->tx_queues;
> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>  
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);

Some drivers change the transmit/receive functions during operation. E.g.
in the hns3 driver, when a reset is detected, the primary process sets the rx/tx
bursts to dummies; after the reset is handled, it restores the correct rx/tx
bursts. During this process, the send and receive threads are still working, but
the bursts they call are changed. So:
1. it is recommended that the trace be removed from the dummy functions.
2. make the eth_dev_fp_ops_reset/setup interfaces public for driver usage.

> +
>  #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  
> +/* public fast-path API */
> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>  
> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
>  		rte_eth_dev_callback_process(eth_dev,
>  				RTE_ETH_EVENT_DESTROY, NULL);
>  
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> +
>  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
>  
>  	eth_dev->state = RTE_ETH_DEV_UNUSED;
> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
>  		(*dev->dev_ops->link_update)(dev, 0);
>  	}
>  
> +	/* expose selection of PMD fast-path functions */
> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> +
>  	rte_ethdev_trace_start(port_id);
>  	return 0;
>  }
> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
>  		return 0;
>  	}
>  
> +	/* point fast-path functions to dummy ones */
> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> +
>  	dev->data->dev_started = 0;
>  	ret = (*dev->dev_ops->dev_stop)(dev);
>  	rte_ethdev_trace_stop(port_id, ret);
> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>  }
>  
> +RTE_INIT(eth_dev_init_fp_ops)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> +}
> +
>  RTE_INIT(eth_dev_init_cb_lists)
>  {
>  	uint16_t i;
> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
>  	if (dev == NULL)
>  		return;
>  
> +	/*
> +	 * for secondary process, at that point we expect device
> +	 * to be already 'usable', so shared data and all function pointers
> +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
> +	 */
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> +
>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>  
>  	dev->state = RTE_ETH_DEV_ATTACHED;
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */
> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/**
> +	 * Rx fast-path functions and related data.
> +	 * 64-bit systems: occupies first 64B line
> +	 */
> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */
> +	eth_rx_queue_count_t rx_queue_count;
> +	/**< Get the number of used RX descriptors. */
> +	eth_rx_descriptor_status_t rx_descriptor_status;
> +	/**< Check the status of a Rx descriptor. */
> +	struct rte_ethdev_qdata rxq;
> +	/**< Rx queues data. */
> +	uintptr_t reserved1[3];
> +
> +	/**
> +	 * Tx fast-path functions and related data.
> +	 * 64-bit systems: occupies second 64B line
> +	 */
> +	eth_tx_burst_t tx_pkt_burst;

Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cacheline?
The other functions, e.g. rx_queue_count/descriptor_status, are called at low frequency.

> +	/**< PMD transmit function. */
> +	eth_tx_prep_t tx_pkt_prepare;
> +	/**< PMD transmit prepare function. */
> +	eth_tx_descriptor_status_t tx_descriptor_status;
> +	/**< Check the status of a Tx descriptor. */
> +	struct rte_ethdev_qdata txq;
> +	/**< Tx queues data. */
> +	uintptr_t reserved2[3];
> +
> +} __rte_cache_aligned;
> +
> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  
>  /**
>   * @internal
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 1/5] ethdev: update modify field flow action
  @ 2021-10-10 23:45  8%   ` Viacheslav Ovsiienko
  2021-10-11  9:54  3%     ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Viacheslav Ovsiienko @ 2021-10-10 23:45 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, shahafs, orika, getelson, thomas

The generic modify field flow action introduced in [1] has
some issues related to the immediate source operand:

  - immediate source can be presented either as an unsigned
    64-bit integer or pointer to data pattern in memory.
    There was no explicit pointer field defined in the union.

  - the byte ordering for 64-bit integer was not specified.
    Many fields have shorter lengths and byte ordering
    is crucial.

  - how the bit offset is applied to the immediate source
    field was not defined and documented.

  - 64-bit integer size is not enough to provide IPv6
    addresses.

In order to cover the issues and exclude any ambiguities
the following is done:

  - introduce the explicit pointer field
    in rte_flow_action_modify_data structure

  - replace the 64-bit unsigned integer with 16-byte array

  - update the modify field flow action documentation

[1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 16 ++++++++++++++++
 doc/guides/rel_notes/release_21_11.rst |  9 +++++++++
 lib/ethdev/rte_flow.h                  | 17 ++++++++++++++---
 3 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..1ceecb399f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2835,6 +2835,22 @@ a packet to any other part of it.
 ``value`` sets an immediate value to be used as a source or points to a
 location of the value in memory. It is used instead of ``level`` and ``offset``
 for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
+The data in memory should be presented exactly in the same byte order and
+length as in the relevant flow item, i.e. data for field with type
+RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
+in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
+rte_flow_item_ipv6 conventions, and so on. If the field size is larger than
+16 bytes, the pattern can be provided as a pointer only.
+
+The bitfield extracted from the memory being applied as second operation
+parameter is defined by action width and by the destination field offset.
+Application should provide the data in immediate value memory (either as
+buffer or by pointer) exactly as item field without any applied explicit offset,
+and destination packet field (with specified width and bit offset) will be
+replaced by immediate source bits from the same bit offset. For example,
+to replace the third byte of the MAC address with value 0x85, the application
+should specify destination width as 8, destination offset as 16, and provide
+the immediate value as the byte sequence {xxx, xxx, 0x85, xxx, xxx, xxx}.
 
 .. _table_rte_flow_action_modify_field:
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index dfc2cbdeed..41a087d7c1 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -187,6 +187,13 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* ethdev: ``rte_flow_action_modify_data`` structure updated, immediate data
+  array is extended, data pointer field is explicitly added to union, the
+  action behavior is defined in more strict fashion and documentation updated.
+  The immediate value behavior has been changed, the entire immediate field
+  should be provided, and offset for immediate source bitfield is assigned
+  from destination one.
+
 
 ABI Changes
 -----------
@@ -222,6 +229,8 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* ethdev: ``rte_flow_action_modify_data`` structure updated.
+
 
 Known Issues
 ------------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..953924d42b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
 };
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
  * Field description for MODIFY_FIELD action.
  */
 struct rte_flow_action_modify_data {
@@ -3217,10 +3220,18 @@ struct rte_flow_action_modify_data {
 			uint32_t offset;
 		};
 		/**
-		 * Immediate value for RTE_FLOW_FIELD_VALUE or
-		 * memory address for RTE_FLOW_FIELD_POINTER.
+		 * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
+		 * same byte order and length as in relevant rte_flow_item_xxx.
+		 * The immediate source bitfield offset is inherited from
+		 * the destination's one.
 		 */
-		uint64_t value;
+		uint8_t value[16];
+		/**
+		 * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
+		 * should be the same as for relevant field in the
+		 * rte_flow_item_xxx structure.
+		 */
+		void *pvalue;
 	};
 };
 
-- 
2.18.1


^ permalink raw reply	[relevance 8%]

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05  0%           ` fengchengwen
@ 2021-10-11  1:18  0%             ` fengchengwen
  2021-10-11  8:35  0%             ` Andrew Rybchenko
  2021-10-11 15:15  0%             ` Ananyev, Konstantin
  2 siblings, 0 replies; 200+ results
From: fengchengwen @ 2021-10-11  1:18 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	andrew.rybchenko, mczekaj, jiawenwu, jianwang, maxime.coquelin,
	chenbo.xia, thomas, ferruh.yigit, mdr, jay.jayatheerthan

Sorry to self-reply.

I think it's better if 'struct rte_eth_dev *dev' holds a pointer to the
'struct rte_eth_fp_ops', e.g.

	struct rte_eth_dev {
		struct rte_eth_fp_ops *fp_ops;
		...  // other field
	}

The eth framework sets the pointer in rte_eth_dev_pci_allocate(), and the driver
fills in the corresponding callbacks:
	dev->fp_ops->rx_pkt_burst = xxx_recv_pkts;
	dev->fp_ops->tx_pkt_burst = xxx_xmit_pkts;
	...

In this way, the behavior of the primary and secondary processes can be unified,
and it stays basically the same as the original behavior.


On 2021/10/9 20:05, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>>    inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>    (for secondary process) we call eth_dev_fp_ops_setup, which
>>    copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>    into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>    flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>>  lib/ethdev/ethdev_private.h  |  7 +++++
>>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>  4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>  	return str == NULL ? -1 : 0;
>>  }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> +		__rte_unused struct rte_mbuf **rx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> +		__rte_unused struct rte_mbuf **tx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable, as this will hide rte_eth_fp_ops as much as possible.
> 
>> +{
>> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
>> +	static const struct rte_eth_fp_ops dummy_ops = {
>> +		.rx_pkt_burst = dummy_eth_rx_burst,
>> +		.tx_pkt_burst = dummy_eth_tx_burst,
>> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
>> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
>> +	};
>> +
>> +	*fpo = dummy_ops;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev)
> 
> Because fp_ops and eth_dev have a one-to-one correspondence, it's better to use only
> the port_id parameter.
> 
>> +{
>> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
>> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
>> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
>> +	fpo->rx_queue_count = dev->rx_queue_count;
>> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
>> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
>> +
>> +	fpo->rxq.data = dev->data->rx_queues;
>> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
>> +
>> +	fpo->txq.data = dev->data->tx_queues;
>> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>> +}
>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>>  /* Parse devargs value for representor parameter. */
>>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>  
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev);
> 
> Some drivers change the transmit/receive functions during operation. E.g. in the
> hns3 driver, when a reset is detected, the primary process sets the rx/tx bursts to dummy
> ones, and after processing the reset it restores the correct rx/tx bursts. During this, the
> send and receive threads are still working, but the bursts they call are changed. So:
> 1. it is recommended that the trace be deleted from the dummy functions.
> 2. make the eth_dev_fp_ops_reset/setup interfaces public for driver usage.
> 
>> +
>>  #endif /* _ETH_PRIVATE_H_ */
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c8abda6dd7..9f7a0cbb8c 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -44,6 +44,9 @@
>>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>>  
>> +/* public fast-path API */
>> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  /* spinlock for eth device callbacks */
>>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>>  
>> @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
>>  		rte_eth_dev_callback_process(eth_dev,
>>  				RTE_ETH_EVENT_DESTROY, NULL);
>>  
>> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
>> +
>>  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
>>  
>>  	eth_dev->state = RTE_ETH_DEV_UNUSED;
>> @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
>>  		(*dev->dev_ops->link_update)(dev, 0);
>>  	}
>>  
>> +	/* expose selection of PMD fast-path functions */
>> +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
>> +
>>  	rte_ethdev_trace_start(port_id);
>>  	return 0;
>>  }
>> @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
>>  		return 0;
>>  	}
>>  
>> +	/* point fast-path functions to dummy ones */
>> +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
>> +
>>  	dev->data->dev_started = 0;
>>  	ret = (*dev->dev_ops->dev_stop)(dev);
>>  	rte_ethdev_trace_stop(port_id, ret);
>> @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
>>  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
>>  }
>>  
>> +RTE_INIT(eth_dev_init_fp_ops)
>> +{
>> +	uint32_t i;
>> +
>> +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
>> +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
>> +}
>> +
>>  RTE_INIT(eth_dev_init_cb_lists)
>>  {
>>  	uint16_t i;
>> @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
>>  	if (dev == NULL)
>>  		return;
>>  
>> +	/*
>> +	 * for secondary process, at that point we expect device
>> +	 * to be already 'usable', so shared data and all function pointers
>> +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
>> +	 */
>> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
>> +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
>> +
>>  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
>>  
>>  	dev->state = RTE_ETH_DEV_ATTACHED;
>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>  /**< @internal Check the status of a Tx descriptor */
>>  
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> +	void **data;
>> +	/**< points to array of internal queue data pointers */
>> +	void **clbk;
>> +	/**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> +	/**
>> +	 * Rx fast-path functions and related data.
>> +	 * 64-bit systems: occupies first 64B line
>> +	 */
>> +	eth_rx_burst_t rx_pkt_burst;
>> +	/**< PMD receive function. */
>> +	eth_rx_queue_count_t rx_queue_count;
>> +	/**< Get the number of used RX descriptors. */
>> +	eth_rx_descriptor_status_t rx_descriptor_status;
>> +	/**< Check the status of a Rx descriptor. */
>> +	struct rte_ethdev_qdata rxq;
>> +	/**< Rx queues data. */
>> +	uintptr_t reserved1[3];
>> +
>> +	/**
>> +	 * Tx fast-path functions and related data.
>> +	 * 64-bit systems: occupies second 64B line
>> +	 */
>> +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cache line?
> The other functions, e.g. rx_queue_count/descriptor_status, are called at low frequency.
> 
>> +	/**< PMD transmit function. */
>> +	eth_tx_prep_t tx_pkt_prepare;
>> +	/**< PMD transmit prepare function. */
>> +	eth_tx_descriptor_status_t tx_descriptor_status;
>> +	/**< Check the status of a Tx descriptor. */
>> +	struct rte_ethdev_qdata txq;
>> +	/**< Tx queues data. */
>> +	uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  
>>  /**
>>   * @internal
>>
> 
> 
> .
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v10 1/8] bbdev: add device info related to data endianness
  @ 2021-10-11  4:32  4%   ` nipun.gupta
  0 siblings, 0 replies; 200+ results
From: nipun.gupta @ 2021-10-11  4:32 UTC (permalink / raw)
  To: dev, gakhil, nicolas.chautru; +Cc: david.marchand, hemant.agrawal, Nipun Gupta

From: Nicolas Chautru <nicolas.chautru@intel.com>

Adding device information to capture explicitly the assumed
byte endianness of the input/output data being processed.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 doc/guides/rel_notes/release_21_11.rst             | 1 +
 drivers/baseband/acc100/rte_acc100_pmd.c           | 1 +
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 1 +
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c       | 1 +
 drivers/baseband/null/bbdev_null.c                 | 6 ++++++
 drivers/baseband/turbo_sw/bbdev_turbo_software.c   | 1 +
 lib/bbdev/rte_bbdev.h                              | 4 ++++
 7 files changed, 15 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c0a7f75518..135aa467f2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -194,6 +194,7 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* bbdev: Added device info related to data byte endianness processing.
 
 ABI Changes
 -----------
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..361e06cf94 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -1088,6 +1088,7 @@ acc100_dev_info_get(struct rte_bbdev *dev,
 #else
 	dev_info->harq_buffer_size = 0;
 #endif
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 	acc100_check_ir(d);
 }
 
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..ee457f3071 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -372,6 +372,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..703bb611a0 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -644,6 +644,7 @@ fpga_dev_info_get(struct rte_bbdev *dev,
 	dev_info->default_queue_conf = default_queue_conf;
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->cpu_flag_reqs = NULL;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	/* Calculates number of queues assigned to device */
 	dev_info->max_num_queues = 0;
diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c
index 53c538ba44..753d920e18 100644
--- a/drivers/baseband/null/bbdev_null.c
+++ b/drivers/baseband/null/bbdev_null.c
@@ -77,6 +77,12 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 	dev_info->cpu_flag_reqs = NULL;
 	dev_info->min_alignment = 0;
 
+	/* BBDEV null device does not process the data, so
+	 * endianness setting is not relevant, but setting it
+	 * here for code completeness.
+	 */
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
+
 	rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
 }
 
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 77e9a2ecbc..7dfeec665a 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -251,6 +251,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 	dev_info->capabilities = bbdev_capabilities;
 	dev_info->min_alignment = 64;
 	dev_info->harq_buffer_size = 0;
+	dev_info->data_endianness = RTE_LITTLE_ENDIAN;
 
 	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
 }
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 3ebf62e697..e863bd913f 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -309,6 +309,10 @@ struct rte_bbdev_driver_info {
 	uint16_t min_alignment;
 	/** HARQ memory available in kB */
 	uint32_t harq_buffer_size;
+	/** Byte endianness (RTE_BIG_ENDIAN/RTE_LITTLE_ENDIAN) supported
+	 *  for input/output data
+	 */
+	uint8_t data_endianness;
 	/** Default queue configuration used if none is supplied  */
 	struct rte_bbdev_queue_conf default_queue_conf;
 	/** Device operation capabilities */
-- 
2.17.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
  2021-10-08  6:41  4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
@ 2021-10-11  5:20  0%   ` Peng, ZhihongX
  2021-10-11  8:25  0%   ` Dmitry Kozlyuk
  1 sibling, 0 replies; 200+ results
From: Peng, ZhihongX @ 2021-10-11  5:20 UTC (permalink / raw)
  To: dmitry.kozliuk; +Cc: dev, stable, olivier.matz

> -----Original Message-----
> From: Peng, ZhihongX <zhihongx.peng@intel.com>
> Sent: Friday, October 8, 2021 2:42 PM
> To: olivier.matz@6wind.com; dmitry.kozliuk@gmail.com
> Cc: dev@dpdk.org; Peng, ZhihongX <zhihongx.peng@intel.com>;
> stable@dpdk.org
> Subject: [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
> 
> From: Zhihong Peng <zhihongx.peng@intel.com>
> 
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be released alone.
> 
> Fixes: af75078fece3 (first public release)
> Cc: stable@dpdk.org
> 
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst | 5 +++++
>  lib/cmdline/cmdline_socket.c           | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index efeffe37a0..be24925d16 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,11 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
> 
> +* cmdline: The API cmdline_stdin_exit has added cmdline_free function.
> +  Malloc cl in the cmdline_stdin_new function, so release in the
> +  cmdline_stdin_exit function is logical. The application code
> +  that calls cmdline_free needs to be deleted.
> +
> 
>  ABI Changes
>  -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
>  		return;
> 
>  	terminal_restore(cl);
> +	cmdline_free(cl);
>  }
> --
> 2.25.1
Hi Kozlyuk,
Can you give me an ack? I have submitted v3 with the release notes added.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
  2021-10-08  7:44  0%                 ` Liu, Changpeng
@ 2021-10-11  6:58  0%                   ` Xia, Chenbo
  0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2021-10-11  6:58 UTC (permalink / raw)
  To: Liu, Changpeng, David Marchand, Harris, James R
  Cc: dev, ci, Aaron Conole, dpdklab, Zawadzki, Tomasz, alexeymar

Hi David & Changpeng,

> -----Original Message-----
> From: Liu, Changpeng <changpeng.liu@intel.com>
> Sent: Friday, October 8, 2021 3:45 PM
> To: David Marchand <david.marchand@redhat.com>; Harris, James R
> <james.r.harris@intel.com>
> Cc: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron
> Conole <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> Subject: RE: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> 
> Thanks, I have worked with Chenbo to address this issue before. After enabling
> the `ALLOW_INTERNAL_API` option, it now works with SPDK.
> 
> Another issue raised by Jim Harris is that, since this option isn't enabled by
> default in distro-packaged DPDK, SPDK will not be able to use the distro-packaged
> DPDK after this release.

I think for this problem we have two options: enable the driver SDK by default or
let OSVs configure the option when building distros. I'm fine with either option.

@David, What do you think?

Thanks,
Chenbo

> 
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Friday, October 8, 2021 3:08 PM
> > To: Liu, Changpeng <changpeng.liu@intel.com>
> > Cc: Xia, Chenbo <chenbo.xia@intel.com>; Harris, James R
> > <james.r.harris@intel.com>; dev@dpdk.org; ci@dpdk.org; Aaron Conole
> > <aconole@redhat.com>; dpdklab <dpdklab@iol.unh.edu>; Zawadzki, Tomasz
> > <tomasz.zawadzki@intel.com>; alexeymar@mellanox.com
> > Subject: Re: [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs
> >
> > Hello,
> >
> > On Fri, Oct 8, 2021 at 8:15 AM Liu, Changpeng <changpeng.liu@intel.com>
> wrote:
> > >
> > > I tried the above DPDK patches, and got the following errors:
> > >
> > > pci.c:115:7: error: call to ‘rte_pci_read_config’ declared with attribute
> error:
> > Symbol is not public ABI
> > >   115 |  rc = rte_pci_read_config(dev->dev_handle, value, len, offset);
> > >       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > pci.c: In function ‘cfg_write_rte’:
> > > pci.c:125:7: error: call to ‘rte_pci_write_config’ declared with attribute
> error:
> > Symbol is not public ABI
> > >   125 |  rc = rte_pci_write_config(dev->dev_handle, value, len, offset);
> > >       |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > pci.c: In function ‘register_rte_driver’:
> > > pci.c:375:2: error: call to ‘rte_pci_register’ declared with attribute
> error:
> > Symbol is not public ABI
> > >   375 |  rte_pci_register(&driver->driver);
> >
> > I should have got this warning... but compilation passed fine for me.
> > Happy you tested it.
> >
> > >
> > > We may use the newly added API to replace rte_pci_write_config and
> > > rte_pci_read_config, but SPDK
> > > does require rte_pci_register().
> >
> > Since SPDK has a PCI driver, you'll need to compile the code that calls
> > those PCI driver internal APIs with ALLOW_INTERNAL_API defined.
> > You can probably add a #define ALLOW_INTERNAL_API first thing (it's
> > important to have it defined before including any dpdk header) in
> > pci.c
> >
> > Another option is to add it to lib/env_dpdk/env.mk:ENV_CFLAGS =
> > $(DPDK_INC) -DALLOW_EXPERIMENTAL_API.
> >
> > Can someone from SPDK take over this and sync with Chenbo?
> >
> >
> > Thanks.
> >
> > --
> > David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v9 0/5] Add PIE support for HQoS library
  2021-09-23  9:45  3%   ` [dpdk-dev] [RFC PATCH v8 " Liguzinski, WojciechX
@ 2021-10-11  7:55  3%     ` Liguzinski, WojciechX
  0 siblings, 0 replies; 200+ results
From: Liguzinski, WojciechX @ 2021-10-11  7:55 UTC (permalink / raw)
  To: dev, jasvinder.singh, cristian.dumitrescu; +Cc: megha.ajmera

The DPDK sched library is equipped with a mechanism that protects it from the bufferbloat
problem, a situation in which excess buffering in the network causes high latency and
latency variation. Currently, it supports RED for active queue management (which is
designed to control the queue length, but does not control latency directly and is now
being obsoleted). However, more advanced queue management is required to address this
problem and provide a desirable quality of service to users.

This solution proposes the use of a new algorithm called "PIE" (Proportional Integral
controller Enhanced) that can effectively and directly control queuing latency to address
the bufferbloat problem.

The implementation of the mentioned functionality includes modifying existing data
structures, adding a new set of data structures to the library, and adding PIE-related
APIs. This affects structures in the public API/ABI, which is why a deprecation notice
is going to be prepared and sent.

Liguzinski, WojciechX (5):
  sched: add PIE based congestion management
  example/qos_sched: add PIE support
  example/ip_pipeline: add PIE support
  doc/guides/prog_guide: added PIE
  app/test: add tests for PIE

 app/test/autotest_data.py                    |   18 +
 app/test/meson.build                         |    4 +
 app/test/test_pie.c                          | 1065 ++++++++++++++++++
 config/rte_config.h                          |    1 -
 doc/guides/prog_guide/glossary.rst           |    3 +
 doc/guides/prog_guide/qos_framework.rst      |   60 +-
 doc/guides/prog_guide/traffic_management.rst |   13 +-
 drivers/net/softnic/rte_eth_softnic_tm.c     |    6 +-
 examples/ip_pipeline/tmgr.c                  |    6 +-
 examples/qos_sched/app_thread.c              |    1 -
 examples/qos_sched/cfg_file.c                |   82 +-
 examples/qos_sched/init.c                    |    7 +-
 examples/qos_sched/profile.cfg               |  196 ++--
 lib/sched/meson.build                        |   10 +-
 lib/sched/rte_pie.c                          |   86 ++
 lib/sched/rte_pie.h                          |  398 +++++++
 lib/sched/rte_sched.c                        |  228 ++--
 lib/sched/rte_sched.h                        |   53 +-
 lib/sched/version.map                        |    3 +
 19 files changed, 2050 insertions(+), 190 deletions(-)
 create mode 100644 app/test/test_pie.c
 create mode 100644 lib/sched/rte_pie.c
 create mode 100644 lib/sched/rte_pie.h

-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count
  2021-10-07 11:27  6%         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
@ 2021-10-11  8:06  0%           ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  8:06 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Currently the majority of fast-path ethdev ops take pointers to internal
> queue data structures as an input parameter,
> while eth_rx_queue_count() takes a pointer to rte_eth_dev and a queue
> index.
> For future work to hide rte_eth_devices[] and friends, it would be
> plausible to unify the parameter lists of all fast-path ethdev ops.
> This patch changes eth_rx_queue_count() to accept a pointer to internal
> queue data as its input parameter.
> While this change is transparent to the user, it still counts as an ABI change,
> as eth_rx_queue_count_t is used by ethdev public inline function
> rte_eth_rx_queue_count().
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

The patch introduces a number of usages of rte_eth_devices in
drivers. As I understand it, this is undesirable, but I don't
think it is a blocker for the patch series. It should be
addressed separately.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v7 0/2] Support IOMMU for DMA device
                     ` (4 preceding siblings ...)
  2021-09-29  2:41  3% ` [dpdk-dev] [PATCH v6 " Xuan Ding
@ 2021-10-11  7:59  3% ` Xuan Ding
  5 siblings, 0 replies; 200+ results
From: Xuan Ding @ 2021-10-11  7:59 UTC (permalink / raw)
  To: dev, anatoly.burakov, maxime.coquelin, chenbo.xia
  Cc: jiayu.hu, cheng1.jiang, bruce.richardson, sunil.pai.g,
	yinan.wang, yvonnex.yang, Xuan Ding

This series enables DMA devices to use vfio in async vhost.

The first patch extends the capability of the current vfio DMA mapping
API to allow unmapping adjacent memory regions individually, even if
the platform does not support partial unmapping. The second patch covers
the IOMMU programming for guest memory in async vhost.

v7:
* Fix an operator error.

v6:
* Fix a potential memory leak.

v5:
* Fix issue of a pointer be freed early.

v4:
* Fix a format issue.

v3:
* Move the async_map_status flag to virtio_net structure to avoid
ABI breaking.

v2:
* Add rte_errno filtering for some devices bound in the kernel driver.
* Add a flag to check the status of region mapping.
* Fix one typo.

Xuan Ding (2):
  vfio: allow partially unmapping adjacent memory
  vhost: enable IOMMU for async vhost

 lib/eal/linux/eal_vfio.c | 338 ++++++++++++++++++++++++++-------------
 lib/vhost/vhost.h        |   4 +
 lib/vhost/vhost_user.c   | 116 +++++++++++++-
 3 files changed, 346 insertions(+), 112 deletions(-)

-- 
2.17.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
  2021-10-09 12:05  0%           ` fengchengwen
@ 2021-10-11  8:25  0%           ` Andrew Rybchenko
  1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  8:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Copy public function pointers (rx_pkt_burst(), etc.) and related
> pointers to internal data from rte_eth_dev structure into a
> separate flat array. That array will remain in a public header.
> The intention here is to make rte_eth_dev and related structures internal.
> That should allow future possible changes to core eth_dev structures
> to be transparent to the user and help to avoid ABI/API breakages.
> The plan is to keep minimal part of data from rte_eth_dev public,
> so we still can use inline functions for fast-path calls
> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> The whole idea beyond this new schema:
> 1. PMDs keep to setup fast-path function pointers and related data
>    inside rte_eth_dev struct in the same way they did it before.
> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>    (for secondary process) we call eth_dev_fp_ops_setup, which
>    copies these function and data pointers into rte_eth_fp_ops[port_id].
> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>    into some dummy values.
> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>    flat array to call PMD specific functions.
> That approach should allow us to make rte_eth_devices[] private
> without introducing regression and help to avoid changes in drivers code.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Overall LGTM, few nits below.

> ---
>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>  lib/ethdev/ethdev_private.h  |  7 +++++
>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 012cf73ca2..3eeda6e9f9 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>  	return str == NULL ? -1 : 0;
>  }
> +
> +static uint16_t
> +dummy_eth_rx_burst(__rte_unused void *rxq,
> +		__rte_unused struct rte_mbuf **rx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");

Maybe "unconfigured" -> "stopped"? Or "non-started"?

> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +static uint16_t
> +dummy_eth_tx_burst(__rte_unused void *txq,
> +		__rte_unused struct rte_mbuf **tx_pkts,
> +		__rte_unused uint16_t nb_pkts)
> +{
> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");

Maybe "unconfigured" -> "stopped"?

> +	rte_errno = ENOTSUP;
> +	return 0;
> +}
> +
> +void
> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> +{
> +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> +	static const struct rte_eth_fp_ops dummy_ops = {
> +		.rx_pkt_burst = dummy_eth_rx_burst,
> +		.tx_pkt_burst = dummy_eth_tx_burst,
> +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> +	};
> +
> +	*fpo = dummy_ops;
> +}
> +
> +void
> +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev)
> +{
> +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> +	fpo->rx_queue_count = dev->rx_queue_count;
> +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> +
> +	fpo->rxq.data = dev->data->rx_queues;
> +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> +
> +	fpo->txq.data = dev->data->tx_queues;
> +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> +}
> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> index 3724429577..5721be7bdc 100644
> --- a/lib/ethdev/ethdev_private.h
> +++ b/lib/ethdev/ethdev_private.h
> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>  /* Parse devargs value for representor parameter. */
>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>  
> +/* reset eth fast-path API to dummy values */
> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> +
> +/* setup eth fast-path API to ethdev values */
> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> +		const struct rte_eth_dev *dev);
> +
>  #endif /* _ETH_PRIVATE_H_ */
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c8abda6dd7..9f7a0cbb8c 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -44,6 +44,9 @@
>  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
>  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
>  
> +/* public fast-path API */

Shouldn't it be a Doxygen-style comment?

> +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> +
>  /* spinlock for eth device callbacks */
>  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
>  

[snip]

> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 51cd68de94..d5853dff86 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>  /**< @internal Check the status of a Tx descriptor */
>  
> +/**
> + * @internal
> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> + * queues data.
> + * The main purpose to expose these pointers at all - allow compiler
> + * to fetch this data for fast-path ethdev inline functions in advance.
> + */
> +struct rte_ethdev_qdata {
> +	void **data;
> +	/**< points to array of internal queue data pointers */

Please put the documentation on the same line, or just
put it before the documented member.

> +	void **clbk;
> +	/**< points to array of queue callback data pointers */
> +};
> +
> +/**
> + * @internal
> + * fast-path ethdev functions and related data are hold in a flat array.
> + * One entry per ethdev.
> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> + * On 32-bit systems contents of this structure fits into one 64B line.
> + */
> +struct rte_eth_fp_ops {
> +
> +	/**
> +	 * Rx fast-path functions and related data.
> +	 * 64-bit systems: occupies first 64B line
> +	 */

As I understand, the comment above applies to the group of fields
below. If so, the Doxygen annotation for member groups should
be used.
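For example, a member group is opened and closed with `@{`/`@}` markers after a `@name` header; a minimal sketch with illustrative names (not the actual ethdev definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of Doxygen member-group annotation; field names are
 * illustrative only, not the real rte_eth_fp_ops layout.
 */
struct example_fp_ops {
	/** @name Rx fast-path functions and related data.
	 *  64-bit systems: occupies the first 64B line.
	 */
	/**@{*/
	uint16_t (*rx_pkt_burst)(void *rxq);  /**< PMD receive function. */
	int (*rx_queue_count)(void *rxq);     /**< Used Rx descriptor count. */
	/**@}*/

	/** @name Tx fast-path functions and related data.
	 *  64-bit systems: occupies the second 64B line.
	 */
	/**@{*/
	uint16_t (*tx_pkt_burst)(void *txq);  /**< PMD transmit function. */
	/**@}*/
};
```

With this form, Doxygen renders one shared description for the group while the per-member `/**< ... */` comments stay on the same line as each member.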

> +	eth_rx_burst_t rx_pkt_burst;
> +	/**< PMD receive function. */

May I ask you to avoid putting documentation after the member on a
separate line? It makes sense when it is located on the same
line, but otherwise it should simply be put before the
documented member.

[snip]

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit
  2021-10-08  6:41  4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
  2021-10-11  5:20  0%   ` Peng, ZhihongX
@ 2021-10-11  8:25  0%   ` Dmitry Kozlyuk
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-10-11  8:25 UTC (permalink / raw)
  To: zhihongx.peng; +Cc: olivier.matz, dev, stable

2021-10-08 06:41 (UTC+0000), zhihongx.peng@intel.com:
> From: Zhihong Peng <zhihongx.peng@intel.com>
> 
> Malloc cl in the cmdline_stdin_new function, so release in the
> cmdline_stdin_exit function is logical, so that cl will not be
> released alone.
> 
> Fixes: af75078fece3 (first public release)
> Cc: stable@dpdk.org

As I have explained before, backporting this will introduce a double-free bug
in user apps unless their code is fixed, so it must not be done.

> 
> Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst | 5 +++++
>  lib/cmdline/cmdline_socket.c           | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index efeffe37a0..be24925d16 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -191,6 +191,11 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
>  
> +* cmdline: The API cmdline_stdin_exit has added cmdline_free function.
> +  Malloc cl in the cmdline_stdin_new function, so release in the
> +  cmdline_stdin_exit function is logical. The application code
> +  that calls cmdline_free needs to be deleted.
> +

There's probably no need to go into such details, suggestion:

* cmdline: ``cmdline_stdin_exit()`` now frees the ``cmdline`` structure.
  Calls to ``cmdline_free()`` after it need to be deleted from applications.

>  
>  ABI Changes
>  -----------
> diff --git a/lib/cmdline/cmdline_socket.c b/lib/cmdline/cmdline_socket.c
> index 998e8ade25..ebd5343754 100644
> --- a/lib/cmdline/cmdline_socket.c
> +++ b/lib/cmdline/cmdline_socket.c
> @@ -53,4 +53,5 @@ cmdline_stdin_exit(struct cmdline *cl)
>  		return;
>  
>  	terminal_restore(cl);
> +	cmdline_free(cl);
>  }



* Re: [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields
  2021-10-08 20:45  3%   ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
@ 2021-10-11  8:31  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-10-11  8:31 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Stephen Hemminger, ray.kinsella,
	bruce.richardson

08/10/2021 22:45, Akhil Goyal:
> In struct rte_security_ipsec_sa_options, for every new option
> added, there is an ABI breakage, to avoid, a reserved_opts
> bitfield is added to for the remaining bits available in the
> structure.
> Now for every new sa option, these reserved_opts can be reduced
> and new option can be added.

How do you make sure this field is initialized to 0?
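C only zero-fills members omitted from an initializer; an automatic object that is merely assigned field-by-field gives no such guarantee for the remaining bits. A sketch with an illustrative struct (not the actual rte_security layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative struct, not the actual rte_security_ipsec_sa_options. */
struct sa_options {
	uint32_t esn : 1;
	uint32_t udp_encap : 1;
	uint32_t reserved_opts : 30; /* must remain 0 */
};

/* C guarantees that members omitted from an initializer are
 * zero-filled, so reserved bits are 0 whenever the application
 * uses any initializer at all. */
static int
partial_init_zeroes_reserved(void)
{
	struct sa_options o = { .esn = 1 }; /* reserved_opts zero-filled */
	return o.reserved_opts == 0 && o.esn == 1;
}

/* Explicit zeroing is the safe pattern when no initializer is used;
 * an uninitialized automatic object has indeterminate reserved bits. */
static int
memset_zeroes_reserved(void)
{
	struct sa_options o;

	memset(&o, 0, sizeof(o));
	o.udp_encap = 1;
	return o.reserved_opts == 0;
}
```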





* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05  0%           ` fengchengwen
  2021-10-11  1:18  0%             ` fengchengwen
@ 2021-10-11  8:35  0%             ` Andrew Rybchenko
  2021-10-11 15:15  0%             ` Ananyev, Konstantin
  2 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  8:35 UTC (permalink / raw)
  To: fengchengwen, Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/9/21 3:05 PM, fengchengwen wrote:
> On 2021/10/7 19:27, Konstantin Ananyev wrote:
>> Copy public function pointers (rx_pkt_burst(), etc.) and related
>> pointers to internal data from rte_eth_dev structure into a
>> separate flat array. That array will remain in a public header.
>> The intention here is to make rte_eth_dev and related structures internal.
>> That should allow future possible changes to core eth_dev structures
>> to be transparent to the user and help to avoid ABI/API breakages.
>> The plan is to keep minimal part of data from rte_eth_dev public,
>> so we still can use inline functions for fast-path calls
>> (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>> The whole idea beyond this new schema:
>> 1. PMDs keep to setup fast-path function pointers and related data
>>    inside rte_eth_dev struct in the same way they did it before.
>> 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
>>    (for secondary process) we call eth_dev_fp_ops_setup, which
>>    copies these function and data pointers into rte_eth_fp_ops[port_id].
>> 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
>>    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
>>    into some dummy values.
>> 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
>>    flat array to call PMD specific functions.
>> That approach should allow us to make rte_eth_devices[] private
>> without introducing regression and help to avoid changes in drivers code.
>>
>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> ---
>>  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
>>  lib/ethdev/ethdev_private.h  |  7 +++++
>>  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
>>  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
>>  4 files changed, 141 insertions(+)
>>
>> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
>> index 012cf73ca2..3eeda6e9f9 100644
>> --- a/lib/ethdev/ethdev_private.c
>> +++ b/lib/ethdev/ethdev_private.c
>> @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
>>  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
>>  	return str == NULL ? -1 : 0;
>>  }
>> +
>> +static uint16_t
>> +dummy_eth_rx_burst(__rte_unused void *rxq,
>> +		__rte_unused struct rte_mbuf **rx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +static uint16_t
>> +dummy_eth_tx_burst(__rte_unused void *txq,
>> +		__rte_unused struct rte_mbuf **tx_pkts,
>> +		__rte_unused uint16_t nb_pkts)
>> +{
>> +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
>> +	rte_errno = ENOTSUP;
>> +	return 0;
>> +}
>> +
>> +void
>> +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable; this would hide rte_eth_fp_ops as much as possible.

Sorry, but I see no point in hiding it inside ethdev.
Of course, the prototype should be reconsidered if we make
this ethdev-internal API available to drivers.
If so, I agree that the parameter should be port_id.

[snip]

>> diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
>> index 3724429577..5721be7bdc 100644
>> --- a/lib/ethdev/ethdev_private.h
>> +++ b/lib/ethdev/ethdev_private.h
>> @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
>>  /* Parse devargs value for representor parameter. */
>>  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
>>  
>> +/* reset eth fast-path API to dummy values */
>> +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
>> +
>> +/* setup eth fast-path API to ethdev values */
>> +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>> +		const struct rte_eth_dev *dev);
> 
> Some drivers switch the transmit/receive functions during operation. E.g.
> in the hns3 driver, when a reset is detected, the primary process sets the
> rx/tx burst functions to dummies; after processing the reset, it restores
> the correct rx/tx burst functions. During this process, the send and receive
> threads keep running, but the bursts they call change. So:
> 1. it is recommended that the trace be deleted from the dummy functions.
> 2. make the eth_dev_fp_ops_reset/setup interfaces public for driver usage.

Good point.

[snip]

>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>> index 51cd68de94..d5853dff86 100644
>> --- a/lib/ethdev/rte_ethdev_core.h
>> +++ b/lib/ethdev/rte_ethdev_core.h
>> @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>  /**< @internal Check the status of a Tx descriptor */
>>  
>> +/**
>> + * @internal
>> + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>> + * queues data.
>> + * The main purpose to expose these pointers at all - allow compiler
>> + * to fetch this data for fast-path ethdev inline functions in advance.
>> + */
>> +struct rte_ethdev_qdata {
>> +	void **data;
>> +	/**< points to array of internal queue data pointers */
>> +	void **clbk;
>> +	/**< points to array of queue callback data pointers */
>> +};
>> +
>> +/**
>> + * @internal
>> + * fast-path ethdev functions and related data are hold in a flat array.
>> + * One entry per ethdev.
>> + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
>> + * On 32-bit systems contents of this structure fits into one 64B line.
>> + */
>> +struct rte_eth_fp_ops {
>> +
>> +	/**
>> +	 * Rx fast-path functions and related data.
>> +	 * 64-bit systems: occupies first 64B line
>> +	 */
>> +	eth_rx_burst_t rx_pkt_burst;
>> +	/**< PMD receive function. */
>> +	eth_rx_queue_count_t rx_queue_count;
>> +	/**< Get the number of used RX descriptors. */
>> +	eth_rx_descriptor_status_t rx_descriptor_status;
>> +	/**< Check the status of a Rx descriptor. */
>> +	struct rte_ethdev_qdata rxq;
>> +	/**< Rx queues data. */
>> +	uintptr_t reserved1[3];
>> +
>> +	/**
>> +	 * Tx fast-path functions and related data.
>> +	 * 64-bit systems: occupies second 64B line
>> +	 */
>> +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq/txq in the first cache line?
> Other functions, e.g. rx_queue_count/descriptor_status, are low-frequency calls.

+1 Very good question
If so, tx_pkt_prepare should be on the first cache-line
as well.

>> +	/**< PMD transmit function. */
>> +	eth_tx_prep_t tx_pkt_prepare;
>> +	/**< PMD transmit prepare function. */
>> +	eth_tx_descriptor_status_t tx_descriptor_status;
>> +	/**< Check the status of a Tx descriptor. */
>> +	struct rte_ethdev_qdata txq;
>> +	/**< Tx queues data. */
>> +	uintptr_t reserved2[3];
>> +
>> +} __rte_cache_aligned;
>> +
>> +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
>> +
>>  
>>  /**
>>   * @internal
>>



* Re: [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array
  2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
@ 2021-10-11  9:02  0%           ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  9:02 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> Rework fast-path ethdev functions to use rte_eth_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> One extra thing to note - RX/TX callback invocation will cause extra
> function call with these changes. That might cause some insignificant
> slowdown for code-path where RX/TX callbacks are heavily involved.

I'm sorry for nit picking here and below:

RX -> Rx, TX -> Tx everywhere above.

> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/ethdev/ethdev_private.c |  31 +++++
>  lib/ethdev/rte_ethdev.h     | 242 ++++++++++++++++++++++++++----------
>  lib/ethdev/version.map      |   3 +
>  3 files changed, 208 insertions(+), 68 deletions(-)
> 
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 3eeda6e9f9..1222c6f84e 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -226,3 +226,34 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
>  	fpo->txq.data = dev->data->tx_queues;
>  	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
>  }
> +
> +uint16_t
> +rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +	void *opaque)
> +{
> +	const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +	while (cb != NULL) {
> +		nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> +				nb_pkts, cb->param);
> +		cb = cb->next;
> +	}
> +
> +	return nb_rx;
> +}
> +
> +uint16_t
> +rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque)
> +{
> +	const struct rte_eth_rxtx_callback *cb = opaque;
> +
> +	while (cb != NULL) {
> +		nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> +				cb->param);
> +		cb = cb->next;
> +	}
> +
> +	return nb_pkts;
> +}
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index cdd16d6e57..c0e1a40681 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -4904,6 +4904,33 @@ int rte_eth_representor_info_get(uint16_t port_id,
>  
>  #include <rte_ethdev_core.h>
>  
> +/**
> + * @internal
> + * Helper routine for eth driver rx_burst API.

rx -> Rx

> + * Should be called at exit from PMD's rte_eth_rx_bulk implementation.
> + * Does necessary post-processing - invokes RX callbacks if any, etc.

RX -> Rx

> + *
> + * @param port_id
> + *  The port identifier of the Ethernet device.
> + * @param queue_id
> + *  The index of the receive queue from which to retrieve input packets.

Isn't it rather:
"The index of the queue from which packets are received"?

> + * @param rx_pkts
> + *   The address of an array of pointers to *rte_mbuf* structures that
> + *   have been retrieved from the device.
> + * @param nb_pkts

Should be @param nb_rx

> + *   The number of packets that were retrieved from the device.
> + * @param nb_pkts
> + *   The number of elements in *rx_pkts* array.

@p should be used to refer to a parameter.

The description does not help the reader understand why both nb_rx and
nb_pkts are necessary. Isn't nb_pkts >= nb_rx, making nb_rx alone
sufficient?

> + * @param opaque
> + *   Opaque pointer of RX queue callback related data.

RX -> Rx

> + *
> + * @return
> + *  The number of packets effectively supplied to the *rx_pkts* array.

@p should be used to refer to a parameter.

> + */
> +uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id,
> +		struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts,
> +		void *opaque);
> +
>  /**
>   *
>   * Retrieve a burst of input packets from a receive queue of an Ethernet
> @@ -4995,23 +5022,37 @@ static inline uint16_t
>  rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	uint16_t nb_rx;
> +	struct rte_eth_fp_ops *p;

'p' is typically a very bad name in a function with
many pointer variables, etc. Maybe "fpo", as in the previous
patch?

> +	void *cb, *qd;

Please avoid declaring variables, especially pointers, on
one line.

I'd suggest using 'rxq' instead of 'qd'. The first parameter
of rx_pkt_burst is 'rxq'.

Also, 'cb' seems to be used only under RTE_ETHDEV_RXTX_CALLBACKS.
If so, it could trigger an unused-variable warning when
RTE_ETHDEV_RXTX_CALLBACKS is not defined.
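One way to avoid that warning is to keep the declaration under the same #ifdef as its only use; a sketch with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: the declaration of 'cb' is guarded by the same macro
 * as the code that reads it, so builds without RTE_ETHDEV_RXTX_CALLBACKS
 * never see an unused variable.
 */
static uint16_t
example_rx_burst(uint16_t nb_pkts)
{
	uint16_t nb_rx = nb_pkts;

#ifdef RTE_ETHDEV_RXTX_CALLBACKS
	void *cb = NULL; /* declared only when callbacks are compiled in */

	if (cb != NULL)
		nb_rx = 0; /* placeholder for the callback invocation */
#endif
	return nb_rx;
}
```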

> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return 0;
> +	}
> +#endif
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_RX
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
>  
> -	if (queue_id >= dev->data->nb_rx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", queue_id);
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u for port_id=%u\n",

RX -> Rx

> +			queue_id, port_id);
>  		return 0;
>  	}
>  #endif
> -	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> -				     rx_pkts, nb_pkts);
> +
> +	nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
>  
>  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> -	struct rte_eth_rxtx_callback *cb;
>  
>  	/* __ATOMIC_RELEASE memory order was used when the
>  	 * call back was inserted into the list.
> @@ -5019,16 +5060,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  	 * not required.
>  	 */
> -	cb = __atomic_load_n(&dev->post_rx_burst_cbs[queue_id],
> -				__ATOMIC_RELAXED);
> -
> -	if (unlikely(cb != NULL)) {
> -		do {
> -			nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> -						nb_pkts, cb->param);
> -			cb = cb->next;
> -		} while (cb != NULL);
> -	}
> +	cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], __ATOMIC_RELAXED);
> +	if (unlikely(cb != NULL))
> +		nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, rx_pkts,
> +				nb_rx, nb_pkts, cb);
>  #endif
>  
>  	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
> @@ -5051,16 +5086,27 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
>  static inline int
>  rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> rxq

> +
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
>  
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> -	dev = &rte_eth_devices[port_id];
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_queue_count, -ENOTSUP);
> -	if (queue_id >= dev->data->nb_rx_queues ||
> -	    dev->data->rx_queues[queue_id] == NULL)
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
> +	if (qd == NULL)
>  		return -EINVAL;
>  
> -	return (int)(*dev->rx_queue_count)(dev->data->rx_queues[queue_id]);
> +	return (int)(*p->rx_queue_count)(qd);
>  }
>  
>  /**@{@name Rx hardware descriptor states
> @@ -5108,21 +5154,30 @@ static inline int
>  rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>  	uint16_t offset)
>  {
> -	struct rte_eth_dev *dev;
> -	void *rxq;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> rxq

>  
>  #ifdef RTE_ETHDEV_DEBUG_RX
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
>  #endif
> -	dev = &rte_eth_devices[port_id];
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->rxq.data[queue_id];
> +
>  #ifdef RTE_ETHDEV_DEBUG_RX
> -	if (queue_id >= dev->data->nb_rx_queues)
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (qd == NULL)
>  		return -ENODEV;
>  #endif
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_descriptor_status, -ENOTSUP);
> -	rxq = dev->data->rx_queues[queue_id];
> -
> -	return (*dev->rx_descriptor_status)(rxq, offset);
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
> +	return (*p->rx_descriptor_status)(qd, offset);
>  }
>  
>  /**@{@name Tx hardware descriptor states
> @@ -5169,23 +5224,54 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
>  static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
>  	uint16_t queue_id, uint16_t offset)
>  {
> -	struct rte_eth_dev *dev;
> -	void *txq;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> txq

>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
>  #endif
> -	dev = &rte_eth_devices[port_id];
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
> +
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (queue_id >= dev->data->nb_tx_queues)
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	if (qd == NULL)
>  		return -ENODEV;
>  #endif
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_descriptor_status, -ENOTSUP);
> -	txq = dev->data->tx_queues[queue_id];
> -
> -	return (*dev->tx_descriptor_status)(txq, offset);
> +	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
> +	return (*p->tx_descriptor_status)(qd, offset);
>  }
>  
> +/**
> + * @internal
> + * Helper routine for eth driver tx_burst API.
> + * Should be called before entry PMD's rte_eth_tx_bulk implementation.
> + * Does necessary pre-processing - invokes TX callbacks if any, etc.

TX -> Tx

> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param queue_id
> + *   The index of the transmit queue through which output packets must be
> + *   sent.
> + * @param tx_pkts
> + *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
> + *   which contain the output packets.

*nb_pkts* -> @p nb_pkts

> + * @param nb_pkts
> + *   The maximum number of packets to transmit.
> + * @return
> + *   The number of output packets to transmit.
> + */
> +uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
> +	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);
> +
>  /**
>   * Send a burst of output packets on a transmit queue of an Ethernet device.
>   *
> @@ -5256,20 +5342,34 @@ static inline uint16_t
>  rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>  		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	struct rte_eth_fp_ops *p;
> +	void *cb, *qd;

Same as above

> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
> +		return 0;
> +	}
> +#endif
> +
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> -	RTE_FUNC_PTR_OR_ERR_RET(*dev->tx_pkt_burst, 0);
>  
> -	if (queue_id >= dev->data->nb_tx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",

TX -> Tx

> +			queue_id, port_id);
>  		return 0;
>  	}
>  #endif
>  
>  #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> -	struct rte_eth_rxtx_callback *cb;
>  
>  	/* __ATOMIC_RELEASE memory order was used when the
>  	 * call back was inserted into the list.
> @@ -5277,21 +5377,16 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
>  	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
>  	 * not required.
>  	 */
> -	cb = __atomic_load_n(&dev->pre_tx_burst_cbs[queue_id],
> -				__ATOMIC_RELAXED);
> -
> -	if (unlikely(cb != NULL)) {
> -		do {
> -			nb_pkts = cb->fn.tx(port_id, queue_id, tx_pkts, nb_pkts,
> -					cb->param);
> -			cb = cb->next;
> -		} while (cb != NULL);
> -	}
> +	cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], __ATOMIC_RELAXED);
> +	if (unlikely(cb != NULL))
> +		nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, tx_pkts,
> +				nb_pkts, cb);
>  #endif
>  
> -	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts,
> -		nb_pkts);
> -	return (*dev->tx_pkt_burst)(dev->data->tx_queues[queue_id], tx_pkts, nb_pkts);
> +	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
> +
> +	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
> +	return nb_pkts;
>  }
>  
>  /**
> @@ -5354,31 +5449,42 @@ static inline uint16_t
>  rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
>  		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>  {
> -	struct rte_eth_dev *dev;
> +	struct rte_eth_fp_ops *p;
> +	void *qd;

p -> fpo, qd -> txq

>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (!rte_eth_dev_is_valid_port(port_id)) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);
> +	if (port_id >= RTE_MAX_ETHPORTS ||
> +			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid port_id=%u or queue_id=%u\n",
> +			port_id, queue_id);
>  		rte_errno = ENODEV;
>  		return 0;
>  	}
>  #endif
>  
> -	dev = &rte_eth_devices[port_id];
> +	/* fetch pointer to queue data */
> +	p = &rte_eth_fp_ops[port_id];
> +	qd = p->txq.data[queue_id];
>  
>  #ifdef RTE_ETHDEV_DEBUG_TX
> -	if (queue_id >= dev->data->nb_tx_queues) {
> -		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u\n", queue_id);
> +	if (!rte_eth_dev_is_valid_port(port_id)) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX port_id=%u\n", port_id);

TX -> Tx

> +		rte_errno = ENODEV;
> +		return 0;
> +	}
> +	if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid TX queue_id=%u for port_id=%u\n",

TX -> Tx

> +			queue_id, port_id);
>  		rte_errno = EINVAL;
>  		return 0;
>  	}
>  #endif
>  
> -	if (!dev->tx_pkt_prepare)
> +	if (!p->tx_pkt_prepare)

Please change it to compare vs NULL since you are touching the line,
just to be consistent with the DPDK coding style and the lines above.

>  		return nb_pkts;
>  
> -	return (*dev->tx_pkt_prepare)(dev->data->tx_queues[queue_id],
> -			tx_pkts, nb_pkts);
> +	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
>  }
>  
>  #else
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 904bce6ea1..79e62dcf61 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -7,6 +7,8 @@ DPDK_22 {
>  	rte_eth_allmulticast_disable;
>  	rte_eth_allmulticast_enable;
>  	rte_eth_allmulticast_get;
> +	rte_eth_call_rx_callbacks;
> +	rte_eth_call_tx_callbacks;
>  	rte_eth_dev_adjust_nb_rx_tx_desc;
>  	rte_eth_dev_callback_register;
>  	rte_eth_dev_callback_unregister;
> @@ -76,6 +78,7 @@ DPDK_22 {
>  	rte_eth_find_next_of;
>  	rte_eth_find_next_owned_by;
>  	rte_eth_find_next_sibling;
> +	rte_eth_fp_ops;
>  	rte_eth_iterator_cleanup;
>  	rte_eth_iterator_init;
>  	rte_eth_iterator_next;
> 



* Re: [dpdk-dev] [PATCH v5 0/7] hide eth dev related structures
  2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
                           ` (4 preceding siblings ...)
  2021-10-08 18:13  0%         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
@ 2021-10-11  9:22  0%         ` Andrew Rybchenko
  5 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  9:22 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: xiaoyun.li, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	haiyue.wang, johndale, hyonkim, qi.z.zhang, xiao.w.wang, humin29,
	yisen.zhuang, oulijun, beilei.xing, jingjing.wu, qiming.yang,
	matan, viacheslavo, sthemmin, longli, heinrich.kuhn, kirankumark,
	mczekaj, jiawenwu, jianwang, maxime.coquelin, chenbo.xia, thomas,
	ferruh.yigit, mdr, jay.jayatheerthan

Hi Konstantin,

On 10/7/21 2:27 PM, Konstantin Ananyev wrote:
> v5 changes:
> - Fix spelling (Thomas/David)
> - Rename internal helper functions (David)
> - Reorder patches and update commit messages (Thomas)
> - Update comments (Thomas)
> - Changed layout in rte_eth_fp_ops, to group functions and
>    related data based on their functionality:
>    first 64B line for Rx, second one for Tx.
>    Didn't observe any real performance difference comparing to
>    original layout. Though decided to keep a new one, as it seems
>    a bit more plausible. 
> 
> v4 changes:
>  - Fix secondary process attach (Pavan)
>  - Fix build failure (Ferruh)
>  - Update lib/ethdev/verion.map (Ferruh)
>    Note that moving newly added symbols from EXPERIMENTAL to DPDK_22
>    section makes checkpatch.sh to complain.
> 
> v3 changes:
>  - Changes in public struct naming (Jerin/Haiyue)
>  - Split patches
>  - Update docs
>  - Shamelessly included Andrew's patch:
>    https://patches.dpdk.org/project/dpdk/patch/20210928154856.1015020-1-andrew.rybchenko@oktetlabs.ru/
>    into these series.
>    I have to do similar thing here, so decided to avoid duplicated effort.
> 
> The aim of these patch series is to make rte_ethdev core data structures
> (rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to
> DPDK and not visible to the user.
> That should allow future possible changes to core ethdev related structures
> to be transparent to the user and help to improve ABI/API stability.
> Note that current ethdev API is preserved, but it is a formal ABI break.
> 
> The work is based on previous discussions at:
> https://www.mail-archive.com/dev@dpdk.org/msg211405.html
> https://www.mail-archive.com/dev@dpdk.org/msg216685.html
> and consists of the following main points:
> 1. Copy public 'fast' function pointers (rx_pkt_burst(), etc.) and
>    related data pointer from rte_eth_dev into a separate flat array.
>    We keep it public to still be able to use inline functions for these
>    'fast' calls (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
>    Note that apart from the function pointers themselves, each element of
>    this flat array also contains two opaque pointers for each ethdev:
>    1) a pointer to an array of internal queue data pointers
>    2) a pointer to an array of queue callback data pointers.
>    Note that exposing this extra information allows us to avoid extra
>    changes inside PMD level, plus should help to avoid possible
>    performance degradation.
> 2. Change implementation of 'fast' inline ethdev functions
>    (rte_eth_rx_burst(), etc.) to use new public flat array.
>    While it is an ABI breakage, this change is intended to be transparent
>    for both users (no changes in user app is required) and PMD developers
>    (no changes in PMD is required).
>    One extra note - with the new implementation, RX/TX callback invocation
>    will cost one extra function call. That might cause some slowdown for
>    code paths with RX/TX callbacks heavily involved.
>    Hope such trade-off is acceptable for the community.
> 3. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related
>    things into internal header: <ethdev_driver.h>.
> 
> That approach was selected to:
>   - Avoid(/minimize) possible performance losses.
>   - Minimize required changes inside PMDs.
> 
> Performance testing results (ICX 2.0GHz, E810 (ice)):
>  - testpmd macswap fwd mode, plus
>    a) no RX/TX callbacks:
>       no actual slowdown observed
>    b) bpf-load rx 0 0 JM ./dpdk.org/examples/bpf/t3.o:
>       ~2% slowdown
>  - l3fwd: no actual slowdown observed
> 
> Would like to thank everyone who already reviewed and tested previous
> versions of these series. All other interested parties please don't be shy
> and provide your feedback.

Many thanks for the very good patch series.
I hope we'll make it to be included in 21.11.
If you need any help with cosmetic fixes suggested
by me on review, please, let me know.

Andrew.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 1/5] ethdev: update modify field flow action
  2021-10-10 23:45  8%   ` [dpdk-dev] [PATCH v2 1/5] " Viacheslav Ovsiienko
@ 2021-10-11  9:54  3%     ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11  9:54 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, dev
  Cc: rasland, matan, shahafs, orika, getelson, thomas

On 10/11/21 2:45 AM, Viacheslav Ovsiienko wrote:
> The generic modify field flow action introduced in [1] has
> some issues related to the immediate source operand:
> 
>   - immediate source can be presented either as an unsigned
>     64-bit integer or pointer to data pattern in memory.
>     There was no explicit pointer field defined in the union.
> 
>   - the byte ordering for 64-bit integer was not specified.
>     Many fields have shorter lengths and byte ordering
>     is crucial.
> 
>   - how the bit offset is applied to the immediate source
>     field was not defined and documented.
> 
>   - 64-bit integer size is not enough to provide IPv6
>     addresses.
> 
> In order to cover the issues and exclude any ambiguities
> the following is done:
> 
>   - introduce the explicit pointer field
>     in rte_flow_action_modify_data structure
> 
>   - replace the 64-bit unsigned integer with 16-byte array
> 
>   - update the modify field flow action documentation
> 
> [1] commit 73b68f4c54a0 ("ethdev: introduce generic modify flow action")
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/prog_guide/rte_flow.rst     | 16 ++++++++++++++++
>  doc/guides/rel_notes/release_21_11.rst |  9 +++++++++
>  lib/ethdev/rte_flow.h                  | 17 ++++++++++++++---
>  3 files changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 2b42d5ec8c..1ceecb399f 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -2835,6 +2835,22 @@ a packet to any other part of it.
>  ``value`` sets an immediate value to be used as a source or points to a
>  location of the value in memory. It is used instead of ``level`` and ``offset``
>  for ``RTE_FLOW_FIELD_VALUE`` and ``RTE_FLOW_FIELD_POINTER`` respectively.
> +The data in memory should be presented exactly in the same byte order and
> +length as in the relevant flow item, i.e. data for field with type
> +RTE_FLOW_FIELD_MAC_DST should follow the conventions of dst field
> +in rte_flow_item_eth structure, with type RTE_FLOW_FIELD_IPV6_SRC -
> +rte_flow_item_ipv6 conventions, and so on. If the field size is large than

large -> larger

> +16 bytes the pattern can be provided as pointer only.

RTE_FLOW_FIELD_MAC_DST, dst, rte_flow_item_eth, RTE_FLOW_FIELD_IPV6_SRC,
rte_flow_item_ipv6 should be ``x``.

> +
> +The bitfield extracted from the memory being applied as second operation
> +parameter is defined by action width and by the destination field offset.
> +Application should provide the data in immediate value memory (either as
> +buffer or by pointer) exactly as item field without any applied explicit offset,
> +and destination packet field (with specified width and bit offset) will be
> +replaced by immediate source bits from the same bit offset. For example,
> +to replace the third byte of MAC address with value 0x85, application should
> +specify destination width as 8, destination width as 16, and provide immediate

destination width twice above

> +value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
>  
>  .. _table_rte_flow_action_modify_field:

pvalue should be added in the "destination/source field
definition".

dst and src members documentation should be improved to
highlight the difference. Destination cannot be "immediate" and
"pointer". In fact, "pointer" is a kind of "immediate". May be
it is better to use "constant value" instead of "immediate".

>  
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> index dfc2cbdeed..41a087d7c1 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -187,6 +187,13 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
>  
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated, immediate data

udpdated -> updated

> +  array is extended, data pointer field is explicitly added to union, the
> +  action behavior is defined in more strict fashion and documentation updated.
> +  The immediate value behavior has been changed, the entire immediate field
> +  should be provided, and offset for immediate source bitfield is assigned
> +  from destination one.
> +
>  
>  ABI Changes
>  -----------
> @@ -222,6 +229,8 @@ ABI Changes
>    ``rte_security_ipsec_xform`` to allow applications to configure SA soft
>    and hard expiry limits. Limits can be either in number of packets or bytes.
>  
> +* ethdev: ``rte_flow_action_modify_data`` structure udpdated.

udpdated -> updated
I'm not sure that it makes sense to duplicate ABI changes if
API is changed.

> +
>  
>  Known Issues
>  ------------
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 7b1ed7f110..953924d42b 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -3204,6 +3204,9 @@ enum rte_flow_field_id {
>  };
>  
>  /**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice
> + *

Isn't adding the missing experimental header a separate fix?

>   * Field description for MODIFY_FIELD action.
>   */

"Another packet field" in the next paragraph I read as
a field of another packet which sounds confusing.
I guess it is "Another field of the packet" in fact.
I think it would be nice to clarify as well.

>  struct rte_flow_action_modify_data {
> @@ -3217,10 +3220,18 @@ struct rte_flow_action_modify_data {
>  			uint32_t offset;
>  		};
>  		/**
> -		 * Immediate value for RTE_FLOW_FIELD_VALUE or
> -		 * memory address for RTE_FLOW_FIELD_POINTER.
> +		 * Immediate value for RTE_FLOW_FIELD_VALUE, presented in the
> +		 * same byte order and length as in relevant rte_flow_item_xxx.
> +		 * The immediate source bitfield offset is inherited from
> +		 * the destination's one.
>  		 */
> -		uint64_t value;
> +		uint8_t value[16];
> +		/*

It should be a Doxygen style comment.

> +		 * Memory address for RTE_FLOW_FIELD_POINTER, memory layout
> +		 * should be the same as for relevant field in the
> +		 * rte_flow_item_xxx structure.
> +		 */
> +		void *pvalue;
>  	};
>  };
>  
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
  2021-10-08 20:45  3% ` [dpdk-dev] [PATCH v2 1/3] cryptodev: remove LIST_END enumerators Akhil Goyal
  2021-10-08 20:45  3%   ` [dpdk-dev] [PATCH v2 3/3] security: add reserved bitfields Akhil Goyal
@ 2021-10-11 10:46  0%   ` Zhang, Roy Fan
  1 sibling, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 10:46 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
	jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
	Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
	Ciara

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Friday, October 8, 2021 9:45 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 1/3] cryptodev: remove LIST_END enumerators
> 
> Remove *_LIST_END enumerators from asymmetric crypto
> lib to avoid ABI breakage for every new addition in
> enums.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] sort symbols map
  2021-10-05  9:16  4% [dpdk-dev] [PATCH] sort symbols map David Marchand
  2021-10-05 14:16  0% ` Kinsella, Ray
@ 2021-10-11 11:36  0% ` Dumitrescu, Cristian
  1 sibling, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-10-11 11:36 UTC (permalink / raw)
  To: David Marchand, dev
  Cc: thomas, Yigit, Ferruh, Ray Kinsella, Singh, Jasvinder, Medvedkin,
	Vladimir, Walsh, Conor, Stephen Hemminger



> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, October 5, 2021 10:16 AM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>; Ray
> Kinsella <mdr@ashroe.eu>; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>; Walsh, Conor <conor.walsh@intel.com>;
> Stephen Hemminger <stephen@networkplumber.org>
> Subject: [PATCH] sort symbols map
> 
> Fixed with ./devtools/update-abi.sh $(cat ABI_VERSION)
> 
> Fixes: e73a7ab22422 ("net/softnic: promote manage API")
> Fixes: 8f532a34c4f2 ("fib: promote API to stable")
> Fixes: 4aeb92396b85 ("rib: promote API to stable")
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> I added "./devtools/update-abi.sh $(cat ABI_VERSION)" to my checks.
> 
> I should have caught it when merging fib and rib patches...
> But my eyes (or more likely brain) stopped at net/softnic bits.
> 
> What do you think?
> Should I wait a bit more and send a global patch to catch any missed
> sorting just before rc1?
> 
> In the meantime, if you merge .map updates, try to remember to run the
> command above.
> 
> Thanks.
> ---
>  drivers/net/softnic/version.map |  2 +-
>  lib/fib/version.map             | 21 ++++++++++-----------
>  lib/rib/version.map             | 33 ++++++++++++++++-----------------
>  3 files changed, 27 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/net/softnic/version.map

Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform
  @ 2021-10-11 11:29  5%   ` Radu Nicolau
  2021-10-11 11:29  5%   ` [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T Radu Nicolau
  1 sibling, 0 replies; 200+ results
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
  To: Ray Kinsella, Akhil Goyal, Declan Doherty
  Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
	roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
	daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau

Update ipsec_xform definition to include ESN field.
This allows the application to control the ESN starting value.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst   | 2 +-
 doc/guides/rel_notes/release_21_11.rst | 4 ++++
 lib/security/rte_security.h            | 8 ++++++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index baf15aa722..8b7b0beee2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -212,7 +212,7 @@ Deprecation Notices
 
 * security: The structure ``rte_security_ipsec_xform`` will be extended with
   multiple fields: source and destination port of UDP encapsulation,
-  IPsec payload MSS (Maximum Segment Size), and ESN (Extended Sequence Number).
+  IPsec payload MSS (Maximum Segment Size).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
   will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index c0a7f75518..401c6d453a 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,10 @@ ABI Changes
   ``rte_security_ipsec_xform`` to allow applications to configure SA soft
   and hard expiry limits. Limits can be either in number of packets or bytes.
 
+* security: A new structure ``esn`` was added in structure
+  ``rte_security_ipsec_xform`` to set an initial ESN value. This permits
+  application to start from an arbitrary ESN value for debug and SA lifetime
+  enforcement purposes.
 
 Known Issues
 ------------
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 2013e65e49..371d64647a 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -280,6 +280,14 @@ struct rte_security_ipsec_xform {
 	/**< Anti replay window size to enable sequence replay attack handling.
 	 * replay checking is disabled if the window size is 0.
 	 */
+	union {
+		uint64_t value;
+		struct {
+			uint32_t low;
+			uint32_t hi;
+		};
+	} esn;
+	/**< Extended Sequence Number */
 };
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v8 03/10] security: add UDP params for IPsec NAT-T
    2021-10-11 11:29  5%   ` [dpdk-dev] [PATCH v8 01/10] security: add ESN field to ipsec_xform Radu Nicolau
@ 2021-10-11 11:29  5%   ` Radu Nicolau
  1 sibling, 0 replies; 200+ results
From: Radu Nicolau @ 2021-10-11 11:29 UTC (permalink / raw)
  To: Ray Kinsella, Akhil Goyal, Declan Doherty
  Cc: dev, konstantin.ananyev, vladimir.medvedkin, bruce.richardson,
	roy.fan.zhang, hemant.agrawal, anoobj, abhijit.sinha,
	daniel.m.buckley, marchana, ktejasree, matan, Radu Nicolau

Add support for specifying UDP port params for the UDP encapsulation option.
RFC 3948 section 2.1 does not enforce using specific UDP ports for the
UDP-Encapsulated ESP header.

Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst   | 5 ++---
 doc/guides/rel_notes/release_21_11.rst | 5 +++++
 lib/security/rte_security.h            | 7 +++++++
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8b7b0beee2..d24d69b669 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -210,9 +210,8 @@ Deprecation Notices
   pointer for the private data to the application which can be attached
   to the packet while enqueuing.
 
-* security: The structure ``rte_security_ipsec_xform`` will be extended with
-  multiple fields: source and destination port of UDP encapsulation,
-  IPsec payload MSS (Maximum Segment Size).
+* security: The structure ``rte_security_ipsec_xform`` will be extended with:
+  new field: IPsec payload MSS (Maximum Segment Size).
 
 * security: The IPsec SA config options ``struct rte_security_ipsec_sa_options``
   will be updated with new fields to support new features like IPsec inner
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 8ac6632abf..1a29640eea 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -238,6 +238,11 @@ ABI Changes
   application to start from an arbitrary ESN value for debug and SA lifetime
   enforcement purposes.
 
+* security: A new structure ``udp`` was added in structure
+  ``rte_security_ipsec_xform`` to allow setting the source and destination ports
+  for UDP encapsulated IPsec traffic.
+
+
 Known Issues
 ------------
 
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 371d64647a..b30425e206 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -128,6 +128,11 @@ struct rte_security_ipsec_tunnel_param {
 	};
 };
 
+struct rte_security_ipsec_udp_param {
+	uint16_t sport;
+	uint16_t dport;
+};
+
 /**
  * IPsec Security Association option flags
  */
@@ -288,6 +293,8 @@ struct rte_security_ipsec_xform {
 		};
 	} esn;
 	/**< Extended Sequence Number */
+	struct rte_security_ipsec_udp_param udp;
+	/**< UDP parameters, ignored when udp_encap option not specified */
 };
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v7] ethdev: fix representor port ID search by name
      2021-10-08  9:27  4% ` [dpdk-dev] [PATCH v6] " Andrew Rybchenko
@ 2021-10-11 12:30  4% ` Andrew Rybchenko
  2021-10-11 12:53  4% ` [dpdk-dev] [PATCH v8] " Andrew Rybchenko
  3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 12:30 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
	Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, Xueming Li

From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>

Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.

To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change the ABI, but extra care is required since the ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI-breaking release.

Potentially this is bad for out-of-tree drivers which implement
representors but do not fill in the new backer_port_id field in the
rte_eth_dev_data structure: getting the port ID by name will not work
for them.

mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

v7:
    - use dpdk_dev in net/mlx5 as suggested by Viacheslav O.

v6:
    - provide more information in the changeset description

v5:
    - try to improve name: backer_port_id instead of parent_port_id
    - init new field to RTE_MAX_ETHPORTS on allocation to avoid
      zero port usage by default

v4:
    - apply mlx5 review notes: remove fallback from generic ethdev
      code and add fallback to mlx5 code to handle legacy usecase

v3:
    - fix mlx5 build breakage

v2:
    - fix mlx5 review notes
    - try device port ID first before parent in order to address
      backward compatibility issue
 drivers/net/bnxt/bnxt_reps.c             |  1 +
 drivers/net/enic/enic_vf_representor.c   |  1 +
 drivers/net/i40e/i40e_vf_representor.c   |  1 +
 drivers/net/ice/ice_dcf_vf_representor.c |  1 +
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
 drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
 lib/ethdev/ethdev_driver.h               |  6 +++---
 lib/ethdev/rte_class_eth.c               |  2 +-
 lib/ethdev/rte_ethdev.c                  |  9 +++++----
 lib/ethdev/rte_ethdev_core.h             |  6 ++++++
 11 files changed, 46 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = rep_params->vf_id;
+	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
 
 	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
 	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->backer_port_id = pf->port_id;
 	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
 		sizeof(struct rte_ether_addr) *
 		ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = pf->dev_data->port_id;
 
 	/* Setting the number queues allocated to the VF */
 	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
+	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
 
 	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
 
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..3858984f02 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	priv->mp_id.port_id = eth_dev->data->port_id;
 	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..9de8adecf4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	/*
 	 * Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
  * For backward compatibility, if no representor info, direct
  * map legacy VF (no controller and pf).
  *
- * @param ethdev
- *  Handle of ethdev port.
+ * @param port_id
+ *  Port ID of the backing device.
  * @param type
  *  Representor type.
  * @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
  */
 __rte_internal
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
 		c = i / (np * nf);
 		p = (i / nf) % np;
 		f = i % nf;
-		if (rte_eth_representor_id_get(edev,
+		if (rte_eth_representor_id_get(edev->data->backer_port_id,
 			eth_da.type,
 			eth_da.nb_mh_controllers == 0 ? -1 :
 					eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
 	eth_dev = eth_dev_get(port_id);
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
+	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
 	eth_dev->data->mtu = RTE_ETHER_MTU;
 	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
 
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
 }
 
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 		return -EINVAL;
 
 	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+	ret = rte_eth_representor_info_get(port_id, NULL);
 	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
 	    controller == -1 && pf == -1) {
 		/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 	if (info == NULL)
 		return -ENOMEM;
 	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+	ret = rte_eth_representor_info_get(port_id, info);
 	if (ret < 0)
 		goto out;
 
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 			continue;
 		if (info->ranges[i].id_end < info->ranges[i].id_base) {
 			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				ethdev->data->port_id, info->ranges[i].id_base,
+				port_id, info->ranges[i].id_base,
 				info->ranges[i].id_end, i);
 			continue;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
 			/**< Switch-specific identifier.
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
+	uint16_t backer_port_id;
+			/**< Port ID of the backing device.
+			 *   This device will be used to query representor
+			 *   info and calculate representor IDs.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
 
 	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-- 
2.30.2


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
  @ 2021-10-11 12:43  2%   ` Akhil Goyal
  2021-10-11 14:45  0%     ` Zhang, Roy Fan
  2021-10-11 12:43  3%   ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-11 12:43 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 lib/cryptodev/cryptodev_pmd.c      | 51 ++++++++++++++++++++++++++++++
 lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
 lib/cryptodev/rte_cryptodev.c      | 29 +++++++++++++++++
 lib/cryptodev/rte_cryptodev_core.h | 29 +++++++++++++++++
 lib/cryptodev/version.map          |  5 +++
 5 files changed, 125 insertions(+)
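
For readers outside DPDK, the flat-array pattern introduced here can be
sketched in plain C as follows. This is a simplified illustration with
invented names (fp_ops, enqueue_burst, fp_ops_reset), not the actual
DPDK definitions: fast-path function pointers live in a public flat
array indexed by device ID, while the device structure itself can stay
private.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_DEVS 4

/* Fast-path ops live in a flat array indexed by device ID, so the
 * full device structure can stay private to the library. */
struct fp_ops {
	uint16_t (*enqueue_burst)(void *qp, void **ops, uint16_t nb_ops);
	void **qp_data; /* per-device array of queue-pair pointers */
};

static struct fp_ops fp_ops_tbl[MAX_DEVS];
static void *test_qps[1]; /* stand-in queue-pair storage */

/* Dummy burst function installed while a device is unconfigured. */
static uint16_t
dummy_enqueue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops; (void)nb_ops;
	return 0;
}

/* Stand-in for a PMD burst function: pretend everything is enqueued. */
static uint16_t
real_enqueue(void *qp, void **ops, uint16_t nb_ops)
{
	(void)qp; (void)ops;
	return nb_ops;
}

/* Equivalent of cryptodev_fp_ops_reset(): point to safe dummies. */
static void
fp_ops_reset(struct fp_ops *ops)
{
	static void *dummy_qp[1];

	ops->enqueue_burst = dummy_enqueue;
	ops->qp_data = dummy_qp;
}

/* The inline wrapper applications compile against touches only the
 * flat array, never the private device structure. */
static inline uint16_t
enqueue_burst(uint8_t dev_id, uint16_t qp_id, void **ops, uint16_t nb_ops)
{
	struct fp_ops *fops = &fp_ops_tbl[dev_id];

	return fops->enqueue_burst(fops->qp_data[qp_id], ops, nb_ops);
}
```

Installing the real function pointer at "dev_start" time then switches
the dispatch without any change on the caller side.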

diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 44a70ecb35..4646708045 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -4,6 +4,7 @@
 
 #include <sys/queue.h>
 
+#include <rte_errno.h>
 #include <rte_string_fns.h>
 #include <rte_malloc.h>
 
@@ -160,3 +161,53 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 
 	return 0;
 }
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto enqueue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused void *qp,
+			   __rte_unused struct rte_crypto_op **ops,
+			   __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto dequeue burst requested for unconfigured device");
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_crypto_fp_ops dummy = {
+		.enqueue_burst = dummy_crypto_enqueue_burst,
+		.dequeue_burst = dummy_crypto_dequeue_burst,
+		.qp = {
+			.data = dummy_data,
+			.enq_cb = dummy_data,
+			.deq_cb = dummy_data,
+		},
+	};
+
+	*fp_ops = dummy;
+}
+
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev)
+{
+	fp_ops->enqueue_burst = dev->enqueue_burst;
+	fp_ops->dequeue_burst = dev->dequeue_burst;
+	fp_ops->qp.data = dev->data->queue_pairs;
+	fp_ops->qp.enq_cb = (void **)(uintptr_t)dev->enq_cbs;
+	fp_ops->qp.deq_cb = (void **)(uintptr_t)dev->deq_cbs;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 36606dd10b..a71edbb991 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -516,6 +516,17 @@ RTE_INIT(init_ ##driver_id)\
 	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
 }
 
+/* Reset crypto device fastpath APIs to dummy values. */
+__rte_internal
+void
+cryptodev_fp_ops_reset(struct rte_crypto_fp_ops *fp_ops);
+
+/* Setup crypto device fastpath APIs. */
+__rte_internal
+void
+cryptodev_fp_ops_set(struct rte_crypto_fp_ops *fp_ops,
+		     const struct rte_cryptodev *dev);
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index eb86e629aa..2378892d40 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -53,6 +53,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
 		.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -903,6 +906,16 @@ rte_cryptodev_pmd_allocate(const char *name, int socket_id)
 		cryptodev_globals.nb_devs++;
 	}
 
+	/*
+	 * for secondary process, at that point we expect device
+	 * to be already 'usable', so shared data and all function
+	 * pointers for fast-path devops have to be setup properly
+	 * inside rte_cryptodev.
+	 */
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		cryptodev_fp_ops_set(rte_crypto_fp_ops +
+				cryptodev->data->dev_id, cryptodev);
+
 	return cryptodev;
 }
 
@@ -917,6 +930,8 @@ rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)
 
 	dev_id = cryptodev->data->dev_id;
 
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	/* Close device only if device operations have been set */
 	if (cryptodev->dev_ops) {
 		ret = rte_cryptodev_close(dev_id);
@@ -1080,6 +1095,9 @@ rte_cryptodev_start(uint8_t dev_id)
 	}
 
 	diag = (*dev->dev_ops->dev_start)(dev);
+	/* expose selection of PMD fast-path functions */
+	cryptodev_fp_ops_set(rte_crypto_fp_ops + dev_id, dev);
+
 	rte_cryptodev_trace_start(dev_id, diag);
 	if (diag == 0)
 		dev->data->dev_started = 1;
@@ -1109,6 +1127,9 @@ rte_cryptodev_stop(uint8_t dev_id)
 		return;
 	}
 
+	/* point fast-path functions to dummy ones */
+	cryptodev_fp_ops_reset(rte_crypto_fp_ops + dev_id);
+
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_cryptodev_trace_stop(dev_id);
 	dev->data->dev_started = 0;
@@ -2411,3 +2432,11 @@ rte_cryptodev_allocate_driver(struct cryptodev_driver *crypto_drv,
 
 	return nb_drivers++;
 }
+
+RTE_INIT(cryptodev_init_fp_ops)
+{
+	uint32_t i;
+
+	for (i = 0; i != RTE_DIM(rte_crypto_fp_ops); i++)
+		cryptodev_fp_ops_reset(rte_crypto_fp_ops + i);
+}
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..bac5f8d984 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,35 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops,	uint16_t nb_ops);
 /**< Enqueue packets for processing on queue pair of a device. */
 
+/**
+ * @internal
+ * Structure used to hold opaque pointers to internal cryptodev
+ * queue-pair data.
+ * The main purpose of exposing these pointers is to let the compiler
+ * fetch this data for fast-path cryptodev inline functions in advance.
+ */
+struct rte_cryptodev_qpdata {
+	/** points to array of internal queue pair data pointers. */
+	void **data;
+	/** points to array of enqueue callback data pointers */
+	void **enq_cb;
+	/** points to array of dequeue callback data pointers */
+	void **deq_cb;
+};
+
+struct rte_crypto_fp_ops {
+	/** PMD enqueue burst function. */
+	enqueue_pkt_burst_t enqueue_burst;
+	/** PMD dequeue burst function. */
+	dequeue_pkt_burst_t dequeue_burst;
+	/** Internal queue pair data pointers. */
+	struct rte_cryptodev_qpdata qp;
+	/** Reserved for future ops. */
+	uintptr_t reserved[4];
+} __rte_cache_aligned;
+
+extern struct rte_crypto_fp_ops rte_crypto_fp_ops[RTE_CRYPTO_MAX_DEVS];
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 43cf937e40..ed62ced221 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -45,6 +45,9 @@ DPDK_22 {
 	rte_cryptodev_sym_session_init;
 	rte_cryptodevs;
 
+	#added in 21.11
+	rte_crypto_fp_ops;
+
 	local: *;
 };
 
@@ -109,6 +112,8 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	cryptodev_fp_ops_reset;
+	cryptodev_fp_ops_set;
 	rte_cryptodev_allocate_driver;
 	rte_cryptodev_pmd_allocate;
 	rte_cryptodev_pmd_callback_process;
-- 
2.25.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array
    2021-10-11 12:43  2%   ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-11 12:43  3%   ` Akhil Goyal
  2021-10-11 14:54  0%     ` Zhang, Roy Fan
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-10-11 12:43 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	konstantin.ananyev, radu.nicolau, ajit.khaparde, rnagadheeraj,
	adwivedi, ciara.power, Akhil Goyal

Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user apps are required) and
PMD developers (no changes in PMDs are required).

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)
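
The callback handling reworked below walks a singly linked chain after
each burst; a minimal generic sketch of that chain (invented names,
RCU protection omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One post-dequeue callback; fn may drop ops by returning fewer. */
struct cb {
	uint16_t (*fn)(void **ops, uint16_t nb_ops, void *arg);
	void *arg;
	struct cb *next;
};

/* Walk the chain, feeding each callback the previous one's count. */
static uint16_t
run_cb_chain(const struct cb *cb, void **ops, uint16_t nb_ops)
{
	while (cb != NULL) {
		nb_ops = cb->fn(ops, nb_ops, cb->arg);
		cb = cb->next;
	}
	return nb_ops;
}

/* Sample callback: drop one op per invocation. */
static uint16_t
drop_one(void **ops, uint16_t nb_ops, void *arg)
{
	(void)ops; (void)arg;
	return nb_ops > 0 ? (uint16_t)(nb_ops - 1) : 0;
}
```

In the patch the chain head is now reached through fp_ops->qp.deq_cb
rather than the private device structure, which is what keeps the
change transparent to applications.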

diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ce0dca72be..739ad529e5 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -1832,13 +1832,18 @@ static inline uint16_t
 rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
 	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	nb_ops = (*dev->dequeue_burst)
-			(dev->data->queue_pairs[qp_id], ops, nb_ops);
+
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
+
+	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
+
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->deq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->deq_cbs[qp_id];
+		list = (struct rte_cryptodev_cb_rcu *)&fp_ops->qp.deq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1899,10 +1904,13 @@ static inline uint16_t
 rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
-	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
+	struct rte_crypto_fp_ops *fp_ops;
+	void *qp;
 
+	fp_ops = &rte_crypto_fp_ops[dev_id];
+	qp = fp_ops->qp.data[qp_id];
 #ifdef RTE_CRYPTO_CALLBACKS
-	if (unlikely(dev->enq_cbs != NULL)) {
+	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
 		struct rte_cryptodev_cb_rcu *list;
 		struct rte_cryptodev_cb *cb;
 
@@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
 		 * not required.
 		 */
-		list = &dev->enq_cbs[qp_id];
+		list = (struct rte_cryptodev_cb_rcu *)&fp_ops->qp.enq_cb[qp_id];
 		rte_rcu_qsbr_thread_online(list->qsbr, 0);
 		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
 
@@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
 #endif
 
 	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops, nb_ops);
-	return (*dev->enqueue_burst)(
-			dev->data->queue_pairs[qp_id], ops, nb_ops);
+	return fp_ops->enqueue_burst(qp, ops, nb_ops);
 }
 
 
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8] ethdev: fix representor port ID search by name
                     ` (2 preceding siblings ...)
  2021-10-11 12:30  4% ` [dpdk-dev] [PATCH v7] " Andrew Rybchenko
@ 2021-10-11 12:53  4% ` Andrew Rybchenko
  3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 12:53 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
	Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, Xueming Li

From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>

The patch is required for all PMDs which do not provide representors
info on the representor itself.

The function, rte_eth_representor_id_get(), is used in
eth_representor_cmp() which is required in ethdev class iterator to
search ethdev port ID by name (representor case). Before the patch
the function is called on the representor itself and tries to get
representors info to match.

Search of port ID by name is used after hotplug to find out port ID
of the just plugged device.

Getting a list of representors from a representor does not make sense.
Instead, a backer device should be used.

To this end, extend the rte_eth_dev_data structure to include the port ID
of the backing device for representors.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
Reviewed-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].
It should not be a problem anyway since 21.11 is an ABI-breaking release.

Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new backer_port_id field in the
rte_eth_dev_data structure: getting the port ID by name will not work.

mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

v8:
    - restore lost description improvements

v7:
    - use dpdk_dev in net/mlx5 as suggested by Viacheslav O.

v6:
    - provide more information in the changeset description

v5:
    - try to improve name: backer_port_id instead of parent_port_id
    - init new field to RTE_MAX_ETHPORTS on allocation to avoid
      zero port usage by default

v4:
    - apply mlx5 review notes: remove fallback from generic ethdev
      code and add fallback to mlx5 code to handle legacy usecase

v3:
    - fix mlx5 build breakage

v2:
    - fix mlx5 review notes
    - try device port ID first before parent in order to address
      backward compatibility issue

 drivers/net/bnxt/bnxt_reps.c             |  1 +
 drivers/net/enic/enic_vf_representor.c   |  1 +
 drivers/net/i40e/i40e_vf_representor.c   |  1 +
 drivers/net/ice/ice_dcf_vf_representor.c |  1 +
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/linux/mlx5_os.c         | 13 +++++++++++++
 drivers/net/mlx5/windows/mlx5_os.c       | 13 +++++++++++++
 lib/ethdev/ethdev_driver.h               |  6 +++---
 lib/ethdev/rte_class_eth.c               |  2 +-
 lib/ethdev/rte_ethdev.c                  |  9 +++++----
 lib/ethdev/rte_ethdev_core.h             |  6 ++++++
 11 files changed, 46 insertions(+), 8 deletions(-)
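
The backer selection added to mlx5 below can be sketched generically.
The structures and names here (struct port, set_backer, is_master,
domain_id) are invented for illustration and do not match the mlx5
internals exactly:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PORTS 8

struct port {
	int in_use;
	int is_master;    /* backing PF port */
	int domain_id;    /* switch domain */
	uint16_t backer_port_id;
};

static struct port ports[MAX_PORTS];

/* On representor spawn: pick the master port sharing the switch
 * domain; fall back to the representor's own ID if none is found,
 * mirroring the legacy behaviour. */
static void
set_backer(uint16_t repr_id)
{
	uint16_t p;

	for (p = 0; p < MAX_PORTS; p++) {
		if (ports[p].in_use && ports[p].is_master &&
		    ports[p].domain_id == ports[repr_id].domain_id) {
			ports[repr_id].backer_port_id = p;
			return;
		}
	}
	ports[repr_id].backer_port_id = repr_id;
}
```

rte_eth_representor_id_get() can then be called on the backer port
instead of the representor itself.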

diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index df05619c3f..b7e88e013a 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = rep_params->vf_id;
+	eth_dev->data->backer_port_id = rep_params->parent_dev->data->port_id;
 
 	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
 	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index cfd02c03cc..1a4411844a 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -666,6 +666,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->backer_port_id = pf->port_id;
 	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
 		sizeof(struct rte_ether_addr) *
 		ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..d65b821a01 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = pf->dev_data->port_id;
 
 	/* Setting the number queues allocated to the VF */
 	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f91..c5335ac3cc 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -426,6 +426,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
+	vf_rep_eth_dev->data->backer_port_id = repr->dcf_eth_dev->data->port_id;
 
 	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
 
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..9fa75984fb 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->backer_port_id = representor->pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..3858984f02 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	priv->mp_id.port_id = eth_dev->data->port_id;
 	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 26fa927039..9de8adecf4 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, dpdk_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->backer_port_id = port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS)
+			eth_dev->data->backer_port_id = eth_dev->data->port_id;
 	}
 	/*
 	 * Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 7ce0f7729a..c4ea735732 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1266,8 +1266,8 @@ struct rte_eth_devargs {
  * For backward compatibility, if no representor info, direct
  * map legacy VF (no controller and pf).
  *
- * @param ethdev
- *  Handle of ethdev port.
+ * @param port_id
+ *  Port ID of the backing device.
  * @param type
  *  Representor type.
  * @param controller
@@ -1284,7 +1284,7 @@ struct rte_eth_devargs {
  */
 __rte_internal
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..eda216ced5 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
 		c = i / (np * nf);
 		p = (i / nf) % np;
 		f = i % nf;
-		if (rte_eth_representor_id_get(edev,
+		if (rte_eth_representor_id_get(edev->data->backer_port_id,
 			eth_da.type,
 			eth_da.nb_mh_controllers == 0 ? -1 :
 					eth_da.mh_controllers[c],
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..ed7b43a99f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -524,6 +524,7 @@ rte_eth_dev_allocate(const char *name)
 	eth_dev = eth_dev_get(port_id);
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
+	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
 	eth_dev->data->mtu = RTE_ETHER_MTU;
 	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
 
@@ -5915,7 +5916,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
 }
 
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id)
@@ -5931,7 +5932,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 		return -EINVAL;
 
 	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+	ret = rte_eth_representor_info_get(port_id, NULL);
 	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
 	    controller == -1 && pf == -1) {
 		/* Direct mapping for legacy VF representor. */
@@ -5946,7 +5947,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 	if (info == NULL)
 		return -ENOMEM;
 	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+	ret = rte_eth_representor_info_get(port_id, info);
 	if (ret < 0)
 		goto out;
 
@@ -5965,7 +5966,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 			continue;
 		if (info->ranges[i].id_end < info->ranges[i].id_base) {
 			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				ethdev->data->port_id, info->ranges[i].id_base,
+				port_id, info->ranges[i].id_base,
 				info->ranges[i].id_end, i);
 			continue;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index d2c9ec42c7..66ad8b13c8 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,12 @@ struct rte_eth_dev_data {
 			/**< Switch-specific identifier.
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
+	uint16_t backer_port_id;
+			/**< Port ID of the backing device.
+			 *   This device will be used to query representor
+			 *   info and calculate representor IDs.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
 
 	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-- 
2.30.2


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3] ci: update machine meson option to platform
  2021-10-04 13:29  4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
@ 2021-10-11 13:40  4%   ` Juraj Linkeš
  0 siblings, 0 replies; 200+ results
From: Juraj Linkeš @ 2021-10-11 13:40 UTC (permalink / raw)
  To: thomas, david.marchand, aconole, maicolgabriel; +Cc: dev, Juraj Linkeš

The way we're building DPDK in CI, with -Dmachine=default, has not been
updated since the option got replaced; it was only kept as a
backwards-compatible build call to facilitate ABI verification between
DPDK versions. Update the call to use -Dplatform=generic, which is the
most up to date way to execute the same build and is present in all
DPDK versions the ABI check verifies.

Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
---
v3: ci retest
---
 .ci/linux-build.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 91e43a975b..06aaa79100 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -77,7 +77,7 @@ else
     OPTS="$OPTS -Dexamples=all"
 fi
 
-OPTS="$OPTS -Dmachine=default"
+OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS --default-library=$DEF_LIB"
 OPTS="$OPTS --buildtype=debugoptimized"
 OPTS="$OPTS -Dcheck_includes=true"
-- 
2.20.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
  2021-10-11 12:43  2%   ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
@ 2021-10-11 14:45  0%     ` Zhang, Roy Fan
  0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 14:45 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
	jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
	Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
	Ciara

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 11, 2021 1:43 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 3/5] cryptodev: move inline APIs into separate structure
> 
> Move fastpath inline function pointers from rte_cryptodev into a
> separate structure accessed via a flat array.
> The intention is to make rte_cryptodev and related structures private
> to avoid future API/ABI breakages.
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
>  lib/cryptodev/cryptodev_pmd.c      | 51
> ++++++++++++++++++++++++++++++
>  lib/cryptodev/cryptodev_pmd.h      | 11 +++++++
>  lib/cryptodev/rte_cryptodev.c      | 29 +++++++++++++++++
>  lib/cryptodev/rte_cryptodev_core.h | 29 +++++++++++++++++
>  lib/cryptodev/version.map          |  5 +++
>  5 files changed, 125 insertions(+)
> 
> diff --git a/lib/cryptodev/cryptodev_pmd.c
> b/lib/cryptodev/cryptodev_pmd.c
> index 44a70ecb35..4646708045 100644
> --- a/lib/cryptodev/cryptodev_pmd.c
> +++ b/lib/cryptodev/cryptodev_pmd.c
> @@ -4,6 +4,7 @@
> 
>  #include <sys/queue.h>
> 
> +#include <rte_errno.h>
>  #include <rte_string_fns.h>
>  #include <rte_malloc.h>
> 
> @@ -160,3 +161,53 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev
> *cryptodev)
> 

When a device is removed, i.e. when rte_pci_remove() is called,
cryptodev_fp_ops_reset() will never be called. This may expose a problem.
Looks like cryptodev_fp_ops_reset() needs to be called here too.

>  	return 0;
... 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers
  @ 2021-10-11 14:48  2%   ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
  To: dev

Pickup new FW interface definitions.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
 1 file changed, 1176 insertions(+), 35 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
  */
 #define	MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
 
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define	MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define	MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define	MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define	MAE_CT_VNI_MODE_2VLAN 0x3
+
 /* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
  */
 /* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
 
 /* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
  * be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
  */
 #define	MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
 /* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
 /* enum: Selects the virtual NIC plugged into the MAE switch */
 #define	MAE_MPORT_END_VNIC 0x2
 
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define	MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define	MAE_COUNTER_TYPE_CT 0x1
+
 /* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
  * platforms
  */
@@ -4547,6 +4578,8 @@
 #define	MC_CMD_MEDIA_BASE_T 0x6
 /* enum: QSFP+. */
 #define	MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define	MC_CMD_MEDIA_DSFP 0x8
 #define	MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
 #define	MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
 /* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
 /***********************************/
 /* MC_CMD_GET_PHY_MEDIA_INFO
  * Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
  * returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
  */
 #define	MC_CMD_GET_PHY_MEDIA_INFO 0x4b
 #define	MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
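The DSFP_PAGE/DSFP_BANK definitions above split the 32-bit PAGE input into two 16-bit subfields at LBN (least-significant bit number) 0 and 16. As an illustrative sketch of how a caller would pack a BANK:PAGE pair per those LBN/WIDTH definitions — the macro and function names here are hypothetical, not part of the patch:

```c
#include <stdint.h>

/* Generic MCDI-style field pack: mask VALUE to WIDTH bits, shift to LBN. */
#define MCDI_FIELD(value, lbn, width) \
	(((uint32_t)(value) & ((1u << (width)) - 1u)) << (lbn))

#define DSFP_PAGE_LBN	0
#define DSFP_PAGE_WIDTH	16
#define DSFP_BANK_LBN	16
#define DSFP_BANK_WIDTH	16

/* Pack a DSFP bank/page pair into the MC_CMD_GET_PHY_MEDIA_INFO_IN
 * PAGE dword. Per the comment above, 0xffff:0xffff selects the lower
 * (unbanked) page. */
static uint32_t
dsfp_bank_page(uint16_t bank, uint16_t page)
{
	return MCDI_FIELD(page, DSFP_PAGE_LBN, DSFP_PAGE_WIDTH) |
	       MCDI_FIELD(bank, DSFP_BANK_LBN, DSFP_BANK_WIDTH);
}
```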
 
 /* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
 #define	MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
 #define	NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
 /* enum: FPGA Validate XCLBIN */
 #define	NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define	NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
 /* enum: MUM firmware partition */
 #define	NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
 /* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
 #define	NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
 /* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
 #define	NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define	NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
 /* enum: Test partition on SmartNIC system microcontroller (SUC) */
 #define	NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
 /* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
 #define	MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
 #define	MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
 
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define	MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define	MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define	MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define	MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define	MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Count Rx events only */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Count Tx events only */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Count both Rx and Tx events */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define	MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define	MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define	MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define	MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define	MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
 /* QUEUE_CRC_MODE structuredef */
 #define	QUEUE_CRC_MODE_LEN 1
 #define	QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
  * rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
  */
 #define	MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define	MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
 
 /* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
 #define	MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
 #define	MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
 #define	MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
 
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
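The ENCAP_TYPES_SUPPORTED word above is a bitmask whose individual capability bits are located by the LBN/WIDTH pairs (VXLAN at bit 0, NVGRE at bit 1, GENEVE at bit 2, L2GRE at bit 3). A minimal sketch of testing one bit of the response word — names are shortened here for brevity and are not the macros from the patch:

```c
#include <stdint.h>
#include <stdbool.h>

#define ENCAP_VXLAN_LBN  0
#define ENCAP_NVGRE_LBN  1
#define ENCAP_GENEVE_LBN 2
#define ENCAP_L2GRE_LBN  3

/* Return whether the 1-bit field at the given LBN is set in the
 * ENCAP_TYPES_SUPPORTED response word. */
static bool
encap_supported(uint32_t mask, unsigned int lbn)
{
	return (mask >> lbn) & 1u;
}
```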
 
 /***********************************/
 /* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 
 /* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -17246,6 +17486,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -17793,6 +18036,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -19900,6 +20146,18 @@
 #define	MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
 #define	MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
 
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
 
 /***********************************/
 /* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
 
 /* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
 #define	MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
 #define	RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
 #define	RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
 #define	RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define	RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
 #define	RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
 #define	RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
 #define	RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define	RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define	RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
 #define	RX_PREFIX_FIELD_INFO_TYPE_LBN 24
 #define	RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
 
@@ -26063,6 +26333,10 @@
 #define	MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
 /* enum: Read internal link configuration. */
 #define	MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define	MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define	MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
 
 /* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
  * free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
 #define	MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
 #define	MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
 
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define	MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define	MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define	MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define	MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define	MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define	MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define	MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define	MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
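The GET_MAC_STATS response is variable-length: a 4-byte NUM_STATS word followed by an array of 64-bit counters, which is what the `_LEN(num)`/`_NUM(len)` macro pair above encodes. A standalone restatement of that arithmetic (names shortened; the originals are the MC_CMD_FPGA_OP_GET_MAC_STATS_OUT macros above):

```c
/* Response length in bytes for num 8-byte statistics after the
 * 4-byte NUM_STATS header. */
#define FPGA_MAC_STATS_OUT_LEN(num)	(4 + 8 * (num))

/* Inverse: number of statistics carried by a response of len bytes. */
#define FPGA_MAC_STATS_OUT_NUM(len)	(((len) - 4) / 8)
```

These are consistent with the bounds above: MINNUM 0 gives LENMIN 4, and MAXNUM 31 gives LENMAX 252.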
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
 
 /***********************************/
 /* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
 /* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
 #define	UUID_NODE_LBN 80
 #define	UUID_NODE_WIDTH 48
 
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define	MC_CMD_PLUGIN_ALLOC 0x1ad
+#define	MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef	MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define	MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define	MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define	MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define	MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define	MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define	MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define	MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define	MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define	MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define	MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define	MC_CMD_PLUGIN_FREE 0x1ae
+#define	MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef	MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define	MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define	MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef	MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef	MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values is not expected to be machine-readable.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define	MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef	MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
 /* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
  * an individual extension.
  */
@@ -26561,6 +27173,100 @@
 #define	PLUGIN_EXTENSION_RESERVED_LBN 137
 #define	PLUGIN_EXTENSION_RESERVED_WIDTH 23
 
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped in to units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define	MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef	MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define	MC_CMD_PLUGIN_REQ 0x1b3
+#define	MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef	MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define	MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define	MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define	MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define	MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define	MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE.
+ */
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
 /* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
  * space that maps to a contiguous region of TRGT_ADDR space. Addresses
  * DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
 #define	MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
 
 
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef	MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
+
 /***********************************/
 /* MC_CMD_VIRTIO_INIT_QUEUE
  * Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
 #define	PCIE_FUNCTION_INTF_LBN 32
 #define	PCIE_FUNCTION_INTF_WIDTH 32
 
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define	QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define	QUEUE_ID_ABS_VI_OFST 0
+#define	QUEUE_ID_ABS_VI_LEN 2
+#define	QUEUE_ID_ABS_VI_LBN 0
+#define	QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define	QUEUE_ID_REL_QUEUE_LBN 16
+#define	QUEUE_ID_REL_QUEUE_WIDTH 1
+#define	QUEUE_ID_RESERVED_LBN 17
+#define	QUEUE_ID_RESERVED_WIDTH 15
+
 
 /***********************************/
 /* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
  * Enable descriptor proxying for function into target event queue. Returns VI
  * allocation info for the proxy source function, so that the caller can map
  * absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
  */
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
 
 
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef	MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be extended
+ * width event queue
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
+
 /***********************************/
 /* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
  */
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
 
 
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef	MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef	MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
+
 /***********************************/
 /* MC_CMD_GET_ADDR_SPC_ID
  * Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
 #define	MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
 #define	MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
/* The total number of counter lists available to allocate. A value of zero
  * indicates that counter lists are not supported by the NIC. (But single
  * counters may still be.)
@@ -29429,6 +30300,87 @@
 #define	MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
 #define	MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
 
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with a ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counter lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
 
 /***********************************/
 /* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
 
 /***********************************/
 /* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
  */
 #define	MC_CMD_MAE_COUNTER_ALLOC 0x143
 #define	MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
 
 #define	MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
 /* The number of counters that the driver would like allocated */
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
 
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               MAE_COUNTER_TYPE */
+
 /* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
 /* Generation count. Packets with generation count >= GENERATION_COUNT will
  * contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
  */
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
 
 #define	MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
 #define	MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
 #define	MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
 
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               MAE_COUNTER_TYPE */
+
 /* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
 /* Generation count. A packet with generation count == GENERATION_COUNT will
  * contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
  */
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
 
 #define	MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
 /* The RxQ to write packets to. */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
 
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
 /* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
 
 /* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
  */
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
 
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
 
 /***********************************/
 /* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
 /* If a driver only wished to update one counter within this action set, then
  * it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
  */
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
 /* If a driver only wished to update one counter within this action set, then
  * it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
  */
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
 #define	MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
 #define	MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
 /* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
  */
 #define	MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
 #define	MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
 #define	MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
 #define	MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
 
+/* MAE_MPORT_DESC_V2 structuredef */
+#define	MAE_MPORT_DESC_V2_LEN 56
+#define	MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define	MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define	MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define	MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define	MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define	MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define	MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define	MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define	MAE_MPORT_DESC_V2_UUID_OFST 16
+#define	MAE_MPORT_DESC_V2_UUID_LEN 16
+#define	MAE_MPORT_DESC_V2_UUID_LBN 128
+#define	MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define	MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define	MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define	MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
 
 /***********************************/
 /* MC_CMD_MAE_MPORT_ENUMERATE
-- 
2.30.2


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array
  2021-10-11 12:43  3%   ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
@ 2021-10-11 14:54  0%     ` Zhang, Roy Fan
  0 siblings, 0 replies; 200+ results
From: Zhang, Roy Fan @ 2021-10-11 14:54 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj, De Lara Guarch,
	Pablo, Trahe, Fiona, Doherty, Declan, matan, g.singh,
	jianjay.zhou, asomalap, ruifeng.wang, Ananyev, Konstantin,
	Nicolau, Radu, ajit.khaparde, rnagadheeraj, adwivedi, Power,
	Ciara

Hi Akhil,

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Monday, October 11, 2021 1:43 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Trahe, Fiona <fiona.trahe@intel.com>;
> Doherty, Declan <declan.doherty@intel.com>; matan@nvidia.com;
> g.singh@nxp.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> jianjay.zhou@huawei.com; asomalap@amd.com; ruifeng.wang@arm.com;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Nicolau, Radu
> <radu.nicolau@intel.com>; ajit.khaparde@broadcom.com;
> rnagadheeraj@marvell.com; adwivedi@marvell.com; Power, Ciara
> <ciara.power@intel.com>; Akhil Goyal <gakhil@marvell.com>
> Subject: [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat
> array
> 
> Rework fast-path cryptodev functions to use rte_crypto_fp_ops[].
> While it is an API/ABI breakage, this change is intended to be
> transparent for both users (no changes in user app is required) and
> PMD developers (no changes in PMD is required).
> 
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
>  lib/cryptodev/rte_cryptodev.h | 27 +++++++++++++++++----------
>  1 file changed, 17 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
> index ce0dca72be..739ad529e5 100644
> --- a/lib/cryptodev/rte_cryptodev.h
> +++ b/lib/cryptodev/rte_cryptodev.h
> @@ -1832,13 +1832,18 @@ static inline uint16_t
>  rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
>  		struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
> -	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> +	struct rte_crypto_fp_ops *fp_ops;

We may need to use const for fp_ops since we only call the function pointers in it.
 
> +	void *qp;
> 
>  	rte_cryptodev_trace_dequeue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> -	nb_ops = (*dev->dequeue_burst)
> -			(dev->data->queue_pairs[qp_id], ops, nb_ops);
> +
> +	fp_ops = &rte_crypto_fp_ops[dev_id];
> +	qp = fp_ops->qp.data[qp_id];
> +
> +	nb_ops = fp_ops->dequeue_burst(qp, ops, nb_ops);
> +
>  #ifdef RTE_CRYPTO_CALLBACKS
> -	if (unlikely(dev->deq_cbs != NULL)) {
> +	if (unlikely(fp_ops->qp.deq_cb != NULL)) {
>  		struct rte_cryptodev_cb_rcu *list;
>  		struct rte_cryptodev_cb *cb;
> 
> @@ -1848,7 +1853,7 @@ rte_cryptodev_dequeue_burst(uint8_t dev_id,
> uint16_t qp_id,
>  		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory
> order is
>  		 * not required.
>  		 */
> -		list = &dev->deq_cbs[qp_id];
> +		list = (struct rte_cryptodev_cb_rcu *)&fp_ops-
> >qp.deq_cb[qp_id];
>  		rte_rcu_qsbr_thread_online(list->qsbr, 0);
>  		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> 
> @@ -1899,10 +1904,13 @@ static inline uint16_t
>  rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>  		struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
> -	struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> +	struct rte_crypto_fp_ops *fp_ops;

Same as above

> +	void *qp;
> 
> +	fp_ops = &rte_crypto_fp_ops[dev_id];
> +	qp = fp_ops->qp.data[qp_id];
>  #ifdef RTE_CRYPTO_CALLBACKS
> -	if (unlikely(dev->enq_cbs != NULL)) {
> +	if (unlikely(fp_ops->qp.enq_cb != NULL)) {
>  		struct rte_cryptodev_cb_rcu *list;
>  		struct rte_cryptodev_cb *cb;
> 
> @@ -1912,7 +1920,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id,
> uint16_t qp_id,
>  		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory
> order is
>  		 * not required.
>  		 */
> -		list = &dev->enq_cbs[qp_id];
> +		list = (struct rte_cryptodev_cb_rcu *)&fp_ops-
> >qp.enq_cb[qp_id];
>  		rte_rcu_qsbr_thread_online(list->qsbr, 0);
>  		cb = __atomic_load_n(&list->next, __ATOMIC_RELAXED);
> 
> @@ -1927,8 +1935,7 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id,
> uint16_t qp_id,
>  #endif
> 
>  	rte_cryptodev_trace_enqueue_burst(dev_id, qp_id, (void **)ops,
> nb_ops);
> -	return (*dev->enqueue_burst)(
> -			dev->data->queue_pairs[qp_id], ops, nb_ops);
> +	return fp_ops->enqueue_burst(qp, ops, nb_ops);
>  }
> 
> 
> --
> 2.25.1

Other than the minor comments above
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure
  2021-10-09 12:05  0%           ` fengchengwen
  2021-10-11  1:18  0%             ` fengchengwen
  2021-10-11  8:35  0%             ` Andrew Rybchenko
@ 2021-10-11 15:15  0%             ` Ananyev, Konstantin
  2 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-10-11 15:15 UTC (permalink / raw)
  To: fengchengwen, dev
  Cc: Li, Xiaoyun, anoobj, jerinj, ndabilpuram, adwivedi,
	shepard.siegel, ed.czeck, john.miller, irusskikh, ajit.khaparde,
	somnath.kotur, rahul.lakkireddy, hemant.agrawal, sachin.saxena,
	Wang, Haiyue, Daley, John, hyonkim, Zhang, Qi Z, Wang, Xiao W,
	humin29, yisen.zhuang, oulijun, Xing, Beilei, Wu, Jingjing, Yang,
	Qiming, matan, viacheslavo, sthemmin, longli, heinrich.kuhn,
	kirankumark, andrew.rybchenko, mczekaj, jiawenwu, jianwang,
	maxime.coquelin, Xia, Chenbo, thomas, Yigit, Ferruh, mdr,
	Jayatheerthan, Jay


> > Copy public function pointers (rx_pkt_burst(), etc.) and related
> > pointers to internal data from rte_eth_dev structure into a
> > separate flat array. That array will remain in a public header.
> > The intention here is to make rte_eth_dev and related structures internal.
> > That should allow future possible changes to core eth_dev structures
> > to be transparent to the user and help to avoid ABI/API breakages.
> > The plan is to keep minimal part of data from rte_eth_dev public,
> > so we still can use inline functions for fast-path calls
> > (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
> > The whole idea beyond this new schema:
> > 1. PMDs keep to setup fast-path function pointers and related data
> >    inside rte_eth_dev struct in the same way they did it before.
> > 2. Inside rte_eth_dev_start() and inside rte_eth_dev_probing_finish()
> >    (for secondary process) we call eth_dev_fp_ops_setup, which
> >    copies these function and data pointers into rte_eth_fp_ops[port_id].
> > 3. Inside rte_eth_dev_stop() and inside rte_eth_dev_release_port()
> >    we call eth_dev_fp_ops_reset(), which resets rte_eth_fp_ops[port_id]
> >    into some dummy values.
> > 4. fast-path ethdev API (rte_eth_rx_burst(), etc.) will use that new
> >    flat array to call PMD specific functions.
> > That approach should allow us to make rte_eth_devices[] private
> > without introducing regression and help to avoid changes in drivers code.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/ethdev/ethdev_private.c  | 52 ++++++++++++++++++++++++++++++++++
> >  lib/ethdev/ethdev_private.h  |  7 +++++
> >  lib/ethdev/rte_ethdev.c      | 27 ++++++++++++++++++
> >  lib/ethdev/rte_ethdev_core.h | 55 ++++++++++++++++++++++++++++++++++++
> >  4 files changed, 141 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> > index 012cf73ca2..3eeda6e9f9 100644
> > --- a/lib/ethdev/ethdev_private.c
> > +++ b/lib/ethdev/ethdev_private.c
> > @@ -174,3 +174,55 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
> >  		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
> >  	return str == NULL ? -1 : 0;
> >  }
> > +
> > +static uint16_t
> > +dummy_eth_rx_burst(__rte_unused void *rxq,
> > +		__rte_unused struct rte_mbuf **rx_pkts,
> > +		__rte_unused uint16_t nb_pkts)
> > +{
> > +	RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
> > +	rte_errno = ENOTSUP;
> > +	return 0;
> > +}
> > +
> > +static uint16_t
> > +dummy_eth_tx_burst(__rte_unused void *txq,
> > +		__rte_unused struct rte_mbuf **tx_pkts,
> > +		__rte_unused uint16_t nb_pkts)
> > +{
> > +	RTE_ETHDEV_LOG(ERR, "tx_pkt_burst for unconfigured port\n");
> > +	rte_errno = ENOTSUP;
> > +	return 0;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo)
> 
> The port_id parameter is preferable, this will hide rte_eth_fp_ops as much as possible.

Why do we need to hide it here?
rte_eth_fp_ops is a public structure, and it is a helper function that
just resets fields of this structure to some predefined dummy values.
Nice and simple, so I prefer to keep it like that. 

> 
> > +{
> > +	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
> > +	static const struct rte_eth_fp_ops dummy_ops = {
> > +		.rx_pkt_burst = dummy_eth_rx_burst,
> > +		.tx_pkt_burst = dummy_eth_tx_burst,
> > +		.rxq = {.data = dummy_data, .clbk = dummy_data,},
> > +		.txq = {.data = dummy_data, .clbk = dummy_data,},
> > +	};
> > +
> > +	*fpo = dummy_ops;
> > +}
> > +
> > +void
> > +eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev)
> 
> Because fp_ops and eth_dev is a one-to-one correspondence. It's better only use
> port_id parameter.

Same as above:
All this internal helper function does - copies some fields from one structure to another.
Both structures are visible by ethdev layer.
No point to add extra assumptions and complexity here. 

> 
> > +{
> > +	fpo->rx_pkt_burst = dev->rx_pkt_burst;
> > +	fpo->tx_pkt_burst = dev->tx_pkt_burst;
> > +	fpo->tx_pkt_prepare = dev->tx_pkt_prepare;
> > +	fpo->rx_queue_count = dev->rx_queue_count;
> > +	fpo->rx_descriptor_status = dev->rx_descriptor_status;
> > +	fpo->tx_descriptor_status = dev->tx_descriptor_status;
> > +
> > +	fpo->rxq.data = dev->data->rx_queues;
> > +	fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> > +
> > +	fpo->txq.data = dev->data->tx_queues;
> > +	fpo->txq.clbk = (void **)(uintptr_t)dev->pre_tx_burst_cbs;
> > +}
> > diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
> > index 3724429577..5721be7bdc 100644
> > --- a/lib/ethdev/ethdev_private.h
> > +++ b/lib/ethdev/ethdev_private.h
> > @@ -26,4 +26,11 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
> >  /* Parse devargs value for representor parameter. */
> >  int rte_eth_devargs_parse_representor_ports(char *str, void *data);
> >
> > +/* reset eth fast-path API to dummy values */
> > +void eth_dev_fp_ops_reset(struct rte_eth_fp_ops *fpo);
> > +
> > +/* setup eth fast-path API to ethdev values */
> > +void eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> > +		const struct rte_eth_dev *dev);
> 
> Some drivers control the transmit/receive functions during operation. E.g. in
> the hns3 driver, when a reset is detected, the primary process sets rx/tx burst to dummy
> functions; after the reset is processed, it restores the correct rx/tx burst. During this
> process, the send and receive threads are still working, but the bursts they call change. So:

This text above is a bit too cryptic for me...
Are you saying that your driver changes rte_eth_dev.rx_pkt_burst(/ tx_pkt_burst) on the fly
(after dev_start() and before dev_stop())?
If so, then generally speaking, it is a bad idea.
While it might work for some limited scenarios, right now it is not supported by the ethdev framework
and might introduce a lot of problems.

> 1. it is recommended that trace be deleted from the dummy function.

You are talking about:
RTE_ETHDEV_LOG(ERR, "rx_pkt_burst for unconfigured port\n");
right?
Dummy function is supposed to be set only when device is not able to do RX/TX properly
(not attached, or attached but not configured, or attached and configured, but not started).
Obviously if an app calls rx/tx_burst for such a port it is a major issue that should be flagged immediately.
So I believe having a log here makes perfect sense.

> 2. public the eth_dev_fp_ops_reset/setup interface for driver usage.

You mean move their declarations into ethdev_driver.h?
I suppose that could be done, but I still wonder why a driver would need to
call these functions directly.
 
> > +
> >  #endif /* _ETH_PRIVATE_H_ */
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index c8abda6dd7..9f7a0cbb8c 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -44,6 +44,9 @@
> >  static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
> >  struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
> >
> > +/* public fast-path API */
> > +struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> >  /* spinlock for eth device callbacks */
> >  static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > @@ -578,6 +581,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
> >  		rte_eth_dev_callback_process(eth_dev,
> >  				RTE_ETH_EVENT_DESTROY, NULL);
> >
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
> > +
> >  	rte_spinlock_lock(&eth_dev_shared_data->ownership_lock);
> >
> >  	eth_dev->state = RTE_ETH_DEV_UNUSED;
> > @@ -1787,6 +1792,9 @@ rte_eth_dev_start(uint16_t port_id)
> >  		(*dev->dev_ops->link_update)(dev, 0);
> >  	}
> >
> > +	/* expose selection of PMD fast-path functions */
> > +	eth_dev_fp_ops_setup(rte_eth_fp_ops + port_id, dev);
> > +
> >  	rte_ethdev_trace_start(port_id);
> >  	return 0;
> >  }
> > @@ -1809,6 +1817,9 @@ rte_eth_dev_stop(uint16_t port_id)
> >  		return 0;
> >  	}
> >
> > +	/* point fast-path functions to dummy ones */
> > +	eth_dev_fp_ops_reset(rte_eth_fp_ops + port_id);
> > +
> >  	dev->data->dev_started = 0;
> >  	ret = (*dev->dev_ops->dev_stop)(dev);
> >  	rte_ethdev_trace_stop(port_id, ret);
> > @@ -4567,6 +4578,14 @@ rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id)
> >  	return eth_err(port_id, (*dev->dev_ops->mirror_rule_reset)(dev, rule_id));
> >  }
> >
> > +RTE_INIT(eth_dev_init_fp_ops)
> > +{
> > +	uint32_t i;
> > +
> > +	for (i = 0; i != RTE_DIM(rte_eth_fp_ops); i++)
> > +		eth_dev_fp_ops_reset(rte_eth_fp_ops + i);
> > +}
> > +
> >  RTE_INIT(eth_dev_init_cb_lists)
> >  {
> >  	uint16_t i;
> > @@ -4735,6 +4754,14 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
> >  	if (dev == NULL)
> >  		return;
> >
> > +	/*
> > +	 * for secondary process, at that point we expect device
> > +	 * to be already 'usable', so shared data and all function pointers
> > +	 * for fast-path devops have to be setup properly inside rte_eth_dev.
> > +	 */
> > +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> > +		eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
> > +
> >  	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
> >
> >  	dev->state = RTE_ETH_DEV_ATTACHED;
> > diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> > index 51cd68de94..d5853dff86 100644
> > --- a/lib/ethdev/rte_ethdev_core.h
> > +++ b/lib/ethdev/rte_ethdev_core.h
> > @@ -50,6 +50,61 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> >  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
> >  /**< @internal Check the status of a Tx descriptor */
> >
> > +/**
> > + * @internal
> > + * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> > + * queues data.
> > + * The main purpose to expose these pointers at all - allow compiler
> > + * to fetch this data for fast-path ethdev inline functions in advance.
> > + */
> > +struct rte_ethdev_qdata {
> > +	void **data;
> > +	/**< points to array of internal queue data pointers */
> > +	void **clbk;
> > +	/**< points to array of queue callback data pointers */
> > +};
> > +
> > +/**
> > + * @internal
> > + * fast-path ethdev functions and related data are hold in a flat array.
> > + * One entry per ethdev.
> > + * On 64-bit systems contents of this structure occupy exactly two 64B lines.
> > + * On 32-bit systems contents of this structure fits into one 64B line.
> > + */
> > +struct rte_eth_fp_ops {
> > +
> > +	/**
> > +	 * Rx fast-path functions and related data.
> > +	 * 64-bit systems: occupies first 64B line
> > +	 */
> > +	eth_rx_burst_t rx_pkt_burst;
> > +	/**< PMD receive function. */
> > +	eth_rx_queue_count_t rx_queue_count;
> > +	/**< Get the number of used RX descriptors. */
> > +	eth_rx_descriptor_status_t rx_descriptor_status;
> > +	/**< Check the status of a Rx descriptor. */
> > +	struct rte_ethdev_qdata rxq;
> > +	/**< Rx queues data. */
> > +	uintptr_t reserved1[3];
> > +
> > +	/**
> > +	 * Tx fast-path functions and related data.
> > +	 * 64-bit systems: occupies second 64B line
> > +	 */
> > +	eth_tx_burst_t tx_pkt_burst;
> 
> Why not place rx_pkt_burst/tx_pkt_burst/rxq /txq to the first cacheline ?
> Other function, e.g. rx_queue_count/descriptor_status are low frequency call functions.

I suppose you are talking about layout like that:
struct rte_eth_fp_ops {
   /* first 64B line */
   rx_pkt_burst;
   tx_pkt_burst;
   tx_pkt_prepare;
   struct rte_ethdev_qdata rxq;
   struct rte_ethdev_qdata txq;
   reserved1[1];
   /* second 64B line */
  ...
};

I thought about such ability, even tried it, but I didn't see any performance gain.
On the other hand, the current layout seems better to me from a structural point of view:
it is more uniform and easier to extend in the future (RX and TX data each occupy a
separate 64B line, with equal room for extension).
 
> > +	/**< PMD transmit function. */
> > +	eth_tx_prep_t tx_pkt_prepare;
> > +	/**< PMD transmit prepare function. */
> > +	eth_tx_descriptor_status_t tx_descriptor_status;
> > +	/**< Check the status of a Tx descriptor. */
> > +	struct rte_ethdev_qdata txq;
> > +	/**< Tx queues data. */
> > +	uintptr_t reserved2[3];
> > +
> > +} __rte_cache_aligned;
> > +
> > +extern struct rte_eth_fp_ops rte_eth_fp_ops[RTE_MAX_ETHPORTS];
> > +
> >
> >  /**
> >   * @internal
> >


^ permalink raw reply	[relevance 0%]

2021-10-04 13:55  4%     ` [dpdk-dev] [PATCH v4 " Konstantin Ananyev
2021-10-04 13:55  6%       ` [dpdk-dev] [PATCH v4 2/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-04 13:55  2%       ` [dpdk-dev] [PATCH v4 3/7] ethdev: copy ethdev 'fast' API into separate structure Konstantin Ananyev
2021-10-05 13:09  0%         ` Thomas Monjalon
2021-10-05 16:41  0%           ` Ananyev, Konstantin
2021-10-04 13:56  2%       ` [dpdk-dev] [PATCH v4 4/7] ethdev: make burst functions to use new flat array Konstantin Ananyev
2021-10-05  9:54  0%         ` David Marchand
2021-10-05 10:13  0%           ` Ananyev, Konstantin
2021-10-04 13:56  8%       ` [dpdk-dev] [PATCH v4 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-05 10:04  0%         ` David Marchand
2021-10-05 10:43  0%           ` Ferruh Yigit
2021-10-06 16:42  0%       ` [dpdk-dev] [PATCH v4 0/7] " Ali Alnubani
2021-10-06 17:26  0%         ` Ali Alnubani
2021-10-07 11:27  4%       ` [dpdk-dev] [PATCH v5 " Konstantin Ananyev
2021-10-07 11:27  6%         ` [dpdk-dev] [PATCH v5 3/7] ethdev: change input parameters for rx_queue_count Konstantin Ananyev
2021-10-11  8:06  0%           ` Andrew Rybchenko
2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 4/7] ethdev: copy fast-path API into separate structure Konstantin Ananyev
2021-10-09 12:05  0%           ` fengchengwen
2021-10-11  1:18  0%             ` fengchengwen
2021-10-11  8:35  0%             ` Andrew Rybchenko
2021-10-11 15:15  0%             ` Ananyev, Konstantin
2021-10-11  8:25  0%           ` Andrew Rybchenko
2021-10-07 11:27  2%         ` [dpdk-dev] [PATCH v5 5/7] ethdev: make fast-path functions to use new flat array Konstantin Ananyev
2021-10-11  9:02  0%           ` Andrew Rybchenko
2021-10-07 11:27  9%         ` [dpdk-dev] [PATCH v5 7/7] ethdev: hide eth dev related structures Konstantin Ananyev
2021-10-08 18:13  0%         ` [dpdk-dev] [PATCH v5 0/7] " Slava Ovsiienko
2021-10-11  9:22  0%         ` Andrew Rybchenko
2021-08-23 13:03     [dpdk-dev] [PATCH v8] eal: remove sys/queue.h from public headers William Tu
2021-08-24 16:21     ` [dpdk-dev] [PATCH v9] " William Tu
2021-09-20 20:11  0%   ` Narcisa Ana Maria Vasile
2021-09-30 22:16  0%     ` William Tu
2021-10-01  7:27  0%       ` David Marchand
2021-10-01 10:34  0%     ` Thomas Monjalon
2021-08-23 19:40     [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal pbhagavatula
2021-10-03  8:26     ` [dpdk-dev] [PATCH v2 01/13] " pbhagavatula
2021-10-03  8:27  2%   ` [dpdk-dev] [PATCH v2 04/13] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-06  6:49       ` [dpdk-dev] [PATCH v3 01/14] eventdev: make driver interface as internal pbhagavatula
2021-10-06  6:50  2%     ` [dpdk-dev] [PATCH v3 04/14] eventdev: move inline APIs into separate structure pbhagavatula
2021-10-06  6:50         ` [dpdk-dev] [PATCH v3 14/14] eventdev: mark trace variables as internal pbhagavatula
2021-10-06  7:11  5%       ` David Marchand
2021-08-26 14:57     [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40     ` [dpdk-dev] [PATCH v1 " Harman Kalra
2021-09-03 12:40       ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-03 18:05  3%     ` Dmitry Kozlyuk
2021-10-04 10:37  0%       ` [dpdk-dev] [EXT] " Harman Kalra
2021-09-23  8:20  0%   ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal David Marchand
2021-10-05 12:14  4% ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
2021-10-05 12:14  1%   ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-05 16:07  0% ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
2021-08-27  6:56     [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-10-11 14:48     ` [dpdk-dev] [PATCH v2 " Andrew Rybchenko
2021-10-11 14:48  2%   ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-29 12:51     [dpdk-dev] [PATCH 0/8] cryptodev: hide internal strutures Akhil Goyal
2021-10-11 12:43     ` [dpdk-dev] [PATCH v2 0/5] cryptodev: hide internal structures Akhil Goyal
2021-10-11 12:43  2%   ` [dpdk-dev] [PATCH v2 3/5] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-10-11 14:45  0%     ` Zhang, Roy Fan
2021-10-11 12:43  3%   ` [dpdk-dev] [PATCH v2 4/5] cryptodev: update fast path APIs to use new flat array Akhil Goyal
2021-10-11 14:54  0%     ` Zhang, Roy Fan
2021-08-30 19:59     [dpdk-dev] [PATCH v2] eventdev: update crypto adapter metadata structures Shijith Thotton
2021-08-31  7:56     ` [dpdk-dev] [PATCH v3] " Shijith Thotton
2021-09-07  8:53       ` Gujjar, Abhinandan S
2021-09-07 10:37         ` Shijith Thotton
2021-09-07 17:30           ` Gujjar, Abhinandan S
2021-09-08  7:42             ` Shijith Thotton
2021-09-08  7:53               ` Gujjar, Abhinandan S
2021-09-14  4:36                 ` Shijith Thotton
2021-09-14  4:46                   ` Anoob Joseph
2021-09-19 14:25  0%                 ` Gujjar, Abhinandan S
2021-09-19 18:49  3%                   ` Akhil Goyal
2021-09-20  5:54  3%                     ` Gujjar, Abhinandan S
2021-09-27 15:22  3%   ` [dpdk-dev] [PATCH v4] doc: remove event crypto metadata deprecation note Shijith Thotton
2021-10-03  9:48  0%     ` Gujjar, Abhinandan S
2021-09-01  5:30     [dpdk-dev] [PATCH 0/2] *** support IOMMU for DMA device *** Xuan Ding
2021-09-17  5:25     ` [dpdk-dev] [PATCH v2 0/2] support IOMMU for DMA device Xuan Ding
2021-09-17  5:25       ` [dpdk-dev] [PATCH v2 2/2] vhost: enable IOMMU for async vhost Xuan Ding
2021-09-23 14:39         ` Hu, Jiayu
2021-09-23 14:56  3%       ` Maxime Coquelin
2021-09-24  1:53  4%         ` Xia, Chenbo
2021-09-24  7:13  0%           ` Maxime Coquelin
2021-09-24  7:35  4%             ` Xia, Chenbo
2021-09-24  8:18  3%               ` Ding, Xuan
2021-09-25 10:03  3% ` [dpdk-dev] [PATCH v3 0/2] support IOMMU for DMA device Xuan Ding
2021-09-25 10:33  3% ` [dpdk-dev] [PATCH v4 " Xuan Ding
2021-09-27  7:48  3% ` [dpdk-dev] [PATCH v5 " Xuan Ding
2021-09-29  2:41  3% ` [dpdk-dev] [PATCH v6 " Xuan Ding
2021-10-11  7:59  3% ` [dpdk-dev] [PATCH v7 0/2] Support " Xuan Ding
2021-09-01 12:20     [dpdk-dev] [PATCH] net/softnic: remove experimental table from API Jasvinder Singh
2021-09-01 13:48     ` Kinsella, Ray
2021-09-15  7:33       ` Ferruh Yigit
2021-09-30 15:42  3%     ` David Marchand
2021-09-01 14:47     [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
2021-10-01 16:59     ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
2021-10-01 16:59  4%   ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
2021-09-03  0:47     [dpdk-dev] [PATCH 0/5] Packet capture framework enhancements Stephen Hemminger
2021-09-24 15:21     ` [dpdk-dev] [PATCH v11 00/12] " Stephen Hemminger
2021-09-24 15:21  1%   ` [dpdk-dev] [PATCH v11 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-09-24 15:22  1%   ` [dpdk-dev] [PATCH v11 11/12] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-10-01 16:26     ` [dpdk-dev] [PATCH v12 00/12] Packet capture framework update Stephen Hemminger
2021-10-01 16:26  1%   ` [dpdk-dev] [PATCH v12 06/12] pdump: support pcapng and filtering Stephen Hemminger
2021-10-01 16:27  1%   ` [dpdk-dev] [PATCH v12 11/12] doc: changes for new pcapng and dumpcap Stephen Hemminger
2021-09-07 14:11     [dpdk-dev] [RFC PATCH v6 0/5] Add PIE support for HQoS library Liguzinski, WojciechX
2021-09-22  7:46  3% ` [dpdk-dev] [RFC PATCH v7 " Liguzinski, WojciechX
2021-09-23  9:45  3%   ` [dpdk-dev] [RFC PATCH v8 " Liguzinski, WojciechX
2021-10-11  7:55  3%     ` [dpdk-dev] [PATCH v9 " Liguzinski, WojciechX
2021-09-07 16:32     [dpdk-dev] [PATCH v2 0/6] Add SA lifetime in security Anoob Joseph
2021-09-28 10:07     ` [dpdk-dev] [PATCH v3 " Anoob Joseph
2021-09-28 10:07  8%   ` [dpdk-dev] [PATCH v3 1/6] security: add SA lifetime configuration Anoob Joseph
2021-09-28 10:59       ` [dpdk-dev] [PATCH v4 0/6] Add SA lifetime in security Anoob Joseph
2021-09-28 10:59  8%     ` [dpdk-dev] [PATCH v4 1/6] security: add SA lifetime configuration Anoob Joseph
2021-09-08  8:25     [dpdk-dev] [PATCH 0/3] add option to configure UDP ports verification Tejasree Kondoj
2021-09-08  8:25     ` [dpdk-dev] [PATCH 1/3] security: " Tejasree Kondoj
2021-09-28 16:11  0%   ` Akhil Goyal
2021-09-08 10:59     [dpdk-dev] [PATCH] net: promote make rarp packet API as stable Xiao Wang
2021-09-16 11:38     ` Olivier Matz
2021-10-02  8:57  0%   ` David Marchand
2021-09-10  2:23     [dpdk-dev] [PATCH 0/8] Removal of PCI bus ABIs Chenbo Xia
2021-09-18  2:24     ` [dpdk-dev] [PATCH v2 0/7] " Chenbo Xia
2021-09-18  2:24  3%   ` [dpdk-dev] [PATCH v2 6/7] kni: replace unused variable definition with reserved bytes Chenbo Xia
2021-09-18  2:24  2%   ` [dpdk-dev] [PATCH v2 7/7] bus/pci: remove ABIs in PCI bus Chenbo Xia
2021-09-29  7:38       ` [dpdk-dev] [PATCH v2 0/7] Removal of PCI bus ABIs Xia, Chenbo
2021-09-30  8:45         ` David Marchand
2021-10-04 13:37           ` David Marchand
2021-10-04 15:56             ` Harris, James R
2021-10-06  4:25               ` Xia, Chenbo
2021-10-08  6:15  4%             ` Liu, Changpeng
2021-10-08  7:08  0%               ` David Marchand
2021-10-08  7:44  0%                 ` Liu, Changpeng
2021-10-11  6:58  0%                   ` Xia, Chenbo
2021-09-10 12:30     [dpdk-dev] [PATCH v1 1/7] eal: promote IPC API's to stable Anatoly Burakov
2021-09-10 15:51     ` Kinsella, Ray
2021-09-29  5:48  3%   ` David Marchand
2021-09-10 14:16     [dpdk-dev] [RFC 1/3] ethdev: update modify field flow action Viacheslav Ovsiienko
2021-10-01 19:52     ` [dpdk-dev] [PATCH 0/3] " Viacheslav Ovsiienko
2021-10-01 19:52  9%   ` [dpdk-dev] [PATCH 1/3] " Viacheslav Ovsiienko
2021-10-04  9:38  0%     ` Ori Kam
2021-10-10 23:45     ` [dpdk-dev] [PATCH v2 0/5] " Viacheslav Ovsiienko
2021-10-10 23:45  8%   ` [dpdk-dev] [PATCH v2 1/5] " Viacheslav Ovsiienko
2021-10-11  9:54  3%     ` Andrew Rybchenko
2021-09-10 23:16     [dpdk-dev] [PATCH] cmdline: make cmdline structure opaque Dmitry Kozlyuk
2021-09-10 23:16     ` [dpdk-dev] [PATCH] cmdline: reduce ABI Dmitry Kozlyuk
2021-09-20 11:11  4%   ` David Marchand
2021-09-20 11:21  4%     ` Olivier Matz
2021-09-28 19:47  4%   ` [dpdk-dev] [PATCH v2 0/2] " Dmitry Kozlyuk
2021-09-28 19:47  4%     ` [dpdk-dev] [PATCH v2 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05  0:55  4%     ` [dpdk-dev] [PATCH v3 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05  0:55  4%       ` [dpdk-dev] [PATCH v3 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05  0:55  3%       ` [dpdk-dev] [PATCH v3 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-05 20:15  4%       ` [dpdk-dev] [PATCH v4 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-05 20:15  4%         ` [dpdk-dev] [PATCH v4 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-05 20:15  3%         ` [dpdk-dev] [PATCH v4 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-10-07 22:10  4%         ` [dpdk-dev] [PATCH v5 0/2] cmdline: reduce ABI Dmitry Kozlyuk
2021-10-07 22:10  4%           ` [dpdk-dev] [PATCH v5 1/2] cmdline: make struct cmdline opaque Dmitry Kozlyuk
2021-10-07 22:10  3%           ` [dpdk-dev] [PATCH v5 2/2] cmdline: make struct rdline opaque Dmitry Kozlyuk
2021-09-13  8:44     [dpdk-dev] [PATCH] eal: promote rte_mcfg_get_single_file_segment to stable ABI Jakub Grajciar -X (jgrajcia - PANTHEON TECH SRO at Cisco)
2021-09-28 19:39  4% ` David Marchand
2021-09-15 15:16     [dpdk-dev] [PATCH] flow_classify: remove eperimental tag from the API Bernard Iremonger
2021-09-22 16:48     ` [dpdk-dev] [PATCH v2] " Bernard Iremonger
2021-09-23 10:09  3%   ` Kevin Traynor
2021-09-15 21:40     [dpdk-dev] [PATCH 0/5] lib: Windows compatibility renaming Dmitry Kozlyuk
2021-10-01 16:33     ` [dpdk-dev] [PATCH v2 0/2] net: " Dmitry Kozlyuk
2021-10-01 16:33  1%   ` [dpdk-dev] [PATCH v2 1/2] net: rename Ethernet header fields Dmitry Kozlyuk
2021-10-07 22:07       ` [dpdk-dev] [PATCH v3 0/2] net: Windows compatibility renaming Dmitry Kozlyuk
2021-10-07 22:07  1%     ` [dpdk-dev] [PATCH v3 1/2] net: rename Ethernet header fields Dmitry Kozlyuk
2021-09-17  2:15     [dpdk-dev] [PATCH v2 2/2] app/test: Delete cmdline free function zhihongx.peng
2021-10-08  6:41  4% ` [dpdk-dev] [PATCH v3 1/2] lib/cmdline: release cl when cmdline exit zhihongx.peng
2021-10-11  5:20  0%   ` Peng, ZhihongX
2021-10-11  8:25  0%   ` Dmitry Kozlyuk
2021-09-22 22:17  4% [dpdk-dev] Minutes of Technical Board Meeting, 2021-09-08 Honnappa Nagarahalli
2021-09-24  4:38  5% [dpdk-dev] Minutes of Technical Board Meeting, 2021-Sep-22 Jerin Jacob Kollanukkaran
2021-09-28 12:07     [dpdk-dev] [PATCH v2 0/3] add option to configure tunnel header verification Tejasree Kondoj
2021-09-28 12:07  5% ` [dpdk-dev] [PATCH v2 1/3] security: " Tejasree Kondoj
2021-09-28 13:26     [dpdk-dev] [PATCH 0/3] add SA config option for inner pkt csum Archana Muniganti
2021-09-28 13:26  5% ` [dpdk-dev] [PATCH 1/3] security: " Archana Muniganti
2021-09-28 15:23     [dpdk-dev] [PATCH] ethdev: remove deprecated shared counter attribute Andrew Rybchenko
2021-09-28 15:58  4% ` Stephen Hemminger
2021-09-28 16:25  0%   ` Andrew Rybchenko
2021-09-29  3:25     [dpdk-dev] [PATCH v2 0/3] add option to configure UDP ports verification Tejasree Kondoj
2021-09-29  3:25  5% ` [dpdk-dev] [PATCH v2 1/3] security: " Tejasree Kondoj
2021-09-29  9:08     [dpdk-dev] [PATCH v2 0/3] add SA config option for inner pkt csum Archana Muniganti
2021-09-29  9:08  5% ` [dpdk-dev] [PATCH v2 1/3] security: " Archana Muniganti
2021-09-29 10:56  0%   ` Ananyev, Konstantin
2021-09-29 11:03  0%     ` Anoob Joseph
2021-09-29 11:39  0%       ` Ananyev, Konstantin
2021-09-30  5:05  0%         ` Anoob Joseph
2021-09-30  9:09  0%           ` Ananyev, Konstantin
2021-09-29 11:23     [dpdk-dev] [PATCH v3 0/3] " Archana Muniganti
2021-09-29 11:23  5% ` [dpdk-dev] [PATCH v3 1/3] security: " Archana Muniganti
2021-09-29 21:48     [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
2021-09-29 21:48  1% ` [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to offload flags Olivier Matz
2021-09-30 12:58     [dpdk-dev] [PATCH v4 0/3] add SA config option for inner pkt csum Archana Muniganti
2021-09-30 12:58  4% ` [dpdk-dev] [PATCH v4 1/3] security: " Archana Muniganti
2021-10-03 21:09  0%   ` Ananyev, Konstantin
2021-09-30 14:50     [dpdk-dev] [PATCH 0/3] crypto/security session framework rework Akhil Goyal
2021-09-30 14:50  1% ` [dpdk-dev] [PATCH 1/3] security: rework session framework Akhil Goyal
2021-09-30 14:50  1% ` [dpdk-dev] [PATCH 3/3] cryptodev: " Akhil Goyal
2021-10-01 15:53  0%   ` Zhang, Roy Fan
2021-10-04 19:07  0%     ` Akhil Goyal
2021-09-30 19:38  4% [dpdk-dev] DPDK Release Status Meeting 2021-09-23 Mcnamara, John
2021-09-30 19:59  3% [dpdk-dev] DPDK Release Status Meeting 2021-09-30 Mcnamara, John
2021-10-02  0:02     [dpdk-dev] [PATCH v3 1/7] ethdev: allocate max space for internal queue array Pavan Nikhilesh Bhagavatula
2021-10-03 21:05  3% ` Ananyev, Konstantin
2021-10-02  0:14  0% [dpdk-dev] [PATCH v3 3/7] ethdev: copy ethdev 'fast' API into separate structure Pavan Nikhilesh Bhagavatula
2021-10-03 20:58  0% ` Ananyev, Konstantin
2021-10-03 21:10  0%   ` Pavan Nikhilesh Bhagavatula
2021-10-04  6:36 12% [dpdk-dev] [PATCH] cryptodev: extend data-unit length field Matan Azrad
2021-10-04 12:55  4% [dpdk-dev] [PATCH v1] ci: update machine meson option to platform Juraj Linkeš
2021-10-04 13:29  4% ` [dpdk-dev] [PATCH v2] " Juraj Linkeš
2021-10-11 13:40  4%   ` [dpdk-dev] [PATCH v3] " Juraj Linkeš
2021-10-05  9:16  4% [dpdk-dev] [PATCH] sort symbols map David Marchand
2021-10-05 14:16  0% ` Kinsella, Ray
2021-10-05 14:31  0%   ` David Marchand
2021-10-05 15:06  0%   ` David Marchand
2021-10-11 11:36  0% ` Dumitrescu, Cristian
2021-10-06 20:58     [dpdk-dev] [PATCH v9] bbdev: add device info related to data endianness assumption Nicolas Chautru
2021-10-06 20:58  4% ` Nicolas Chautru
2021-10-07 12:01  0%   ` Tom Rix
2021-10-07 15:19  0%     ` Chautru, Nicolas
2021-10-07 13:13  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-07 15:41  0%     ` Chautru, Nicolas
2021-10-07 16:49  0%       ` Nipun Gupta
2021-10-07 18:58  0%         ` Chautru, Nicolas
2021-10-08  4:34  0%           ` Nipun Gupta