DPDK patches and discussions
* [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace
@ 2021-09-29 21:48 Olivier Matz
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags Olivier Matz
                   ` (3 more replies)
  0 siblings, 4 replies; 20+ messages in thread
From: Olivier Matz @ 2021-09-29 21:48 UTC (permalink / raw)
  To: dev

This patchset fixes the mbuf offload flags namespace by adding an RTE_
prefix to the definitions. The old flags remain usable, but a
deprecation warning is issued at compile time.
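
For reference, the compatibility alias pattern is the same one already
used for PKT_RX_EIP_CKSUM_BAD in rte_mbuf_core.h; a minimal sketch,
using the RTE_DEPRECATED warning helper and the flag value from that
header:

  /* New, properly namespaced flag. */
  #define RTE_MBUF_F_TX_VLAN (1ULL << 57)

  /* Old name kept as an alias: it expands to a "GCC warning" pragma
   * followed by the new flag, so legacy code still builds but emits
   * a deprecation warning at compile time. */
  #define PKT_TX_VLAN \
          RTE_DEPRECATED(PKT_TX_VLAN) RTE_MBUF_F_TX_VLAN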

Note for committers: I can rebase this patchset quite easily (most of
the work is scripted), so don't hesitate to sync with me to get an
updated version.

Olivier Matz (3):
  mbuf: remove deprecated offload flags
  cocci: add cocci script to rename mbuf offload flags
  mbuf: add rte prefix to offload flags

 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 +++---
 .../cocci/prefix_mbuf_offload_flags.cocci     | 441 ++++++++++++++++++
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |  10 -
 doc/guides/rel_notes/release_21_11.rst        |   4 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +-
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  26 +-
 drivers/net/cnxk/cn10k_tx.h                   | 172 +++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  26 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 +++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 +++----
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 107 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 +--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 102 ++--
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 +++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 ++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 427 ++++++++++-------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 170 files changed, 3526 insertions(+), 3009 deletions(-)
 create mode 100644 devtools/cocci/prefix_mbuf_offload_flags.cocci

-- 
2.30.2



* [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags
  2021-09-29 21:48 [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
@ 2021-09-29 21:48 ` Olivier Matz
  2021-10-04  8:29   ` David Marchand
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 2/3] cocci: add cocci script to rename mbuf " Olivier Matz
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 20+ messages in thread
From: Olivier Matz @ 2021-09-29 21:48 UTC (permalink / raw)
  To: dev

The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT and PKT_RX_EIP_CKSUM_BAD
have been marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename
deprecated VLAN flags") (2017). Remove their definitions from
rte_mbuf_core.h and replace their usages.
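
Applications that still use these names and must build against both
older and newer DPDK releases can carry their own shim; a hedged,
purely illustrative sketch (not part of this patch):

  /* Hypothetical compatibility shim: a no-op on releases where the
   * old macros still exist, a mapping to the new names otherwise. */
  #ifndef PKT_TX_VLAN_PKT
  #define PKT_TX_VLAN_PKT PKT_TX_VLAN
  #endif
  #ifndef PKT_TX_QINQ_PKT
  #define PKT_TX_QINQ_PKT PKT_TX_QINQ
  #endif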

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/flowgen.c                    |  4 +--
 app/test-pmd/macfwd.c                     |  4 +--
 app/test-pmd/txonly.c                     |  4 +--
 doc/guides/rel_notes/deprecation.rst      |  5 ----
 drivers/net/af_packet/rte_eth_af_packet.c |  2 +-
 drivers/net/avp/avp_ethdev.c              |  4 +--
 drivers/net/axgbe/axgbe_rxtx.c            |  2 +-
 drivers/net/bnx2x/bnx2x.c                 |  2 +-
 drivers/net/bnxt/bnxt_txr.c               |  8 +++---
 drivers/net/cxgbe/sge.c                   |  4 +--
 drivers/net/dpaa2/dpaa2_rxtx.c            |  6 ++---
 drivers/net/e1000/em_rxtx.c               |  4 +--
 drivers/net/e1000/igb_rxtx.c              |  6 ++---
 drivers/net/fm10k/fm10k_rxtx.c            |  4 +--
 drivers/net/hinic/hinic_pmd_tx.c          |  2 +-
 drivers/net/hns3/hns3_rxtx.c              | 14 +++++-----
 drivers/net/i40e/i40e_rxtx.c              | 10 +++----
 drivers/net/iavf/iavf_rxtx.c              |  6 ++---
 drivers/net/iavf/iavf_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c                |  6 ++---
 drivers/net/ionic/ionic_rxtx.c            |  4 +--
 drivers/net/ixgbe/ixgbe_rxtx.c            |  6 ++---
 drivers/net/mlx5/mlx5_tx.h                | 24 ++++++++---------
 drivers/net/netvsc/hn_rxtx.c              |  2 +-
 drivers/net/nfp/nfp_rxtx.c                |  2 +-
 drivers/net/qede/qede_rxtx.c              |  2 +-
 drivers/net/qede/qede_rxtx.h              |  2 +-
 drivers/net/sfc/sfc_ef100_tx.c            |  4 +--
 drivers/net/sfc/sfc_ef10_tx.c             |  2 +-
 drivers/net/sfc/sfc_tx.c                  |  2 +-
 drivers/net/txgbe/txgbe_rxtx.c            |  8 +++---
 drivers/net/vhost/rte_eth_vhost.c         |  2 +-
 drivers/net/virtio/virtio_rxtx.c          |  2 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c        |  4 +--
 examples/vhost/main.c                     |  2 +-
 lib/mbuf/rte_mbuf_core.h                  | 33 ++---------------------
 36 files changed, 83 insertions(+), 117 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a64d..d0360b4363 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,9 +100,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN_PKT;
+		ol_flags |= PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d..21be8bb470 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,9 +73,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..ab7cd622c7 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -353,9 +353,9 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..549e9416c4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -159,11 +159,6 @@ Deprecation Notices
   will be limited to maximum 256 queues.
   Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
 
-* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
-  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
-  consistent with existing outer header checksum status flag naming, which
-  should help in reducing confusion about its usage.
-
 * i40e: As there are both i40evf and iavf pmd, the functions of them are
   duplicated. And now more and more advanced features are developed on iavf.
   To keep consistent with kernel driver's name
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index fcd8090399..294132b759 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -224,7 +224,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		if (mbuf->ol_flags & PKT_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..01553958be 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb..45b9bd3e39 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -811,7 +811,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9163b8b1fd..235c374180 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m0->ol_flags & PKT_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae..5d3cdfa8f2 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -110,10 +110,10 @@ bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
+				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) ||
+				PKT_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -200,13 +200,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ_PKT) {
+		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN_PKT) {
+		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4..3299d6252e 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..f491f4d10a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1228,7 +1228,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT) ||
+						& PKT_TX_VLAN) ||
 						(eth_data->dev_conf.txmode.offloads
 						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
@@ -1271,7 +1271,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
+			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
 				(eth_data->dev_conf.txmode.offloads
 				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT)) {
+						& PKT_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..0105e2d384 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -55,7 +55,7 @@
 		PKT_TX_IPV4 |           \
 		PKT_TX_IP_CKSUM |       \
 		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN_PKT)
+		PKT_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
 		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
@@ -506,7 +506,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN_PKT) {
+		if (ol_flags & PKT_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..c630894052 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -54,7 +54,7 @@
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -260,7 +260,7 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
@@ -369,7 +369,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..496e72a003 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -38,7 +38,7 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 #endif
 
 #define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |            \
 		PKT_TX_IPV4 |            \
 		PKT_TX_IP_CKSUM |        \
@@ -609,7 +609,7 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN_PKT)
+	if (mb->ol_flags & PKT_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index 669f82389c..e14937139d 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN_PKT)) {
+	if (unlikely(ol_flags & PKT_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..59ba9e7454 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -3190,11 +3190,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN_PKT |
-				PKT_TX_QINQ_PKT)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
+				PKT_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ_PKT)
+		if (ol_flags & PKT_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3202,8 +3202,8 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ_PKT ||
-	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & PKT_TX_QINQ ||
+	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
@@ -3742,12 +3742,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ_PKT)
+	if (m->ol_flags & PKT_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN_PKT)
+		if (m->ol_flags & PKT_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..33a6a9e840 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -64,8 +64,8 @@
 		PKT_TX_L4_MASK |        \
 		PKT_TX_OUTER_IP_CKSUM | \
 		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ_PKT |       \
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_QINQ |       \
+		PKT_TX_VLAN |	\
 		PKT_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
@@ -1006,7 +1006,7 @@ i40e_calc_context_desc(uint64_t flags)
 {
 	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
 		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ_PKT |
+		PKT_TX_QINQ |
 		PKT_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1151,7 +1151,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1200,7 +1200,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ_PKT) {
+			if (ol_flags & PKT_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..cba1ba8052 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2074,7 +2074,7 @@ iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
 	if (flags & PKT_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN_PKT &&
+	if (flags & PKT_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN_PKT &&
+		if (ol_flags & PKT_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2301,7 +2301,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN_PKT &&
+			if (ol_flags & PKT_TX_VLAN &&
 			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..6c3fdbb3b2 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -62,7 +62,7 @@
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..9848afd9ca 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -78,7 +78,7 @@
 		PKT_TX_OUTER_IPV4 |	\
 		PKT_TX_IPV6 |		\
 		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_VLAN |	\
 		PKT_TX_IP_CKSUM |	\
 		PKT_TX_L4_MASK |	\
 		PKT_TX_TCP_SEG |	\
@@ -1530,7 +1530,7 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
@@ -1604,7 +1604,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa..431435eea0 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -356,7 +356,7 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN_PKT);
+	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
@@ -495,7 +495,7 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN_PKT);
+	has_vlan = (ol_flags & PKT_TX_VLAN);
 	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
 			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
 			((ol_flags & PKT_TX_OUTER_IPV4) ||
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..717ae8f775 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -64,7 +64,7 @@
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -384,7 +384,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
@@ -543,7 +543,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1a35919371..1efe912a06 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -538,7 +538,7 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
@@ -956,7 +956,7 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1814,7 +1814,7 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
@@ -1929,7 +1929,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	    loc->mbuf->ol_flags & PKT_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2028,7 +2028,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2291,7 +2291,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2416,7 +2416,7 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) &&
+	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2478,7 +2478,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2939,7 +2939,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3004,7 +3004,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN_PKT));
+					    PKT_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3149,7 +3149,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3380,7 +3380,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+			    loc->mbuf->ol_flags & PKT_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..afef7a96a3 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1331,7 +1331,7 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..0dcaf525f6 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -929,7 +929,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN_PKT) &&
+		if ((pkt->ol_flags & PKT_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba..050c6f5c32 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2587,7 +2587,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN_PKT) {
+		if (tx_ol_flags & PKT_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c8..025ed6fff2 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -153,7 +153,7 @@
 				   PKT_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN_PKT		| \
+			      PKT_TX_VLAN		| \
 			      PKT_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d34..53d01612d1 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca..277fe6c6ca 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN_PKT))) {
+			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d2..936ae815ea 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN_PKT) ?
+	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..53da1b8450 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -54,7 +54,7 @@ static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
 		PKT_TX_OUTER_IPV4 |
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
-		PKT_TX_VLAN_PKT |
+		PKT_TX_VLAN |
 		PKT_TX_L4_MASK |
 		PKT_TX_TCP_SEG |
 		PKT_TX_TUNNEL_MASK |
@@ -408,7 +408,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
@@ -496,7 +496,7 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,7 +507,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..214a6ee4c8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 97ed69596a..65f08b775a 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -1747,7 +1747,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) {
+		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de8..69e877f816 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -49,7 +49,7 @@
 #include "vmxnet3_ethdev.h"
 
 #define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN_PKT | \
+		PKT_TX_VLAN | \
 		PKT_TX_IPV6 |     \
 		PKT_TX_IPV4 |     \
 		PKT_TX_L4_MASK |  \
@@ -520,7 +520,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+		if (txm->ol_flags & PKT_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e3..1603c29fb5 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1115,7 +1115,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN_PKT;
+		m->ol_flags |= PKT_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9d8e3ddc86..93db9292c0 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -55,37 +55,12 @@ extern "C" {
  /** RX packet with FDIR match indicate. */
 #define PKT_RX_FDIR          (1ULL << 2)
 
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_L4_CKSUM_MASK.
- * This flag was set when the L4 checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
-
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_IP_CKSUM_MASK.
- * This flag was set when the IP checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
-
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
 #define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
 
-/**
- * Deprecated.
- * This flag has been renamed, use PKT_RX_OUTER_IP_CKSUM_BAD instead.
- */
-#define PKT_RX_EIP_CKSUM_BAD \
-	RTE_DEPRECATED(PKT_RX_EIP_CKSUM_BAD) PKT_RX_OUTER_IP_CKSUM_BAD
-
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
@@ -289,8 +264,6 @@ extern "C" {
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
 #define PKT_TX_QINQ        (1ULL << 49)
-/** This old name is deprecated. */
-#define PKT_TX_QINQ_PKT    PKT_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
@@ -358,8 +331,6 @@ extern "C" {
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
 #define PKT_TX_VLAN          (1ULL << 57)
-/* this old name is deprecated */
-#define PKT_TX_VLAN_PKT      PKT_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
@@ -391,14 +362,14 @@ extern "C" {
 		PKT_TX_OUTER_IPV6 |	 \
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
 		PKT_TX_IP_CKSUM |        \
 		PKT_TX_L4_MASK |         \
 		PKT_TX_IEEE1588_TMST |	 \
 		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ_PKT |        \
+		PKT_TX_QINQ |        \
 		PKT_TX_TUNNEL_MASK |	 \
 		PKT_TX_MACSEC |		 \
 		PKT_TX_SEC_OFFLOAD |	 \
-- 
2.30.2



* [dpdk-dev] [PATCH 2/3] cocci: add cocci script to rename mbuf offload flags
  2021-09-29 21:48 [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags Olivier Matz
@ 2021-09-29 21:48 ` Olivier Matz
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to " Olivier Matz
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
  3 siblings, 0 replies; 20+ messages in thread
From: Olivier Matz @ 2021-09-29 21:48 UTC (permalink / raw)
  To: dev

The mbuf offload flags do not match the DPDK namespace (they are not
prefixed with RTE_). This Coccinelle script is used in the next commit
to do the replacement in the code.

A draft script was initially submitted in commit d7595795b760 ("doc:
announce renaming of mbuf offload flags"), but was dropped by mistake
when that commit was applied.
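
The script can be applied tree-wide with spatch; one possible
invocation (a sketch, exact options may depend on the installed
Coccinelle version):

  spatch --sp-file devtools/cocci/prefix_mbuf_offload_flags.cocci \
         --in-place --dir .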

Link: http://inbox.dpdk.org/dev/20210730155700.32574-1-olivier.matz@6wind.com
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 .../cocci/prefix_mbuf_offload_flags.cocci     | 441 ++++++++++++++++++
 1 file changed, 441 insertions(+)
 create mode 100644 devtools/cocci/prefix_mbuf_offload_flags.cocci

diff --git a/devtools/cocci/prefix_mbuf_offload_flags.cocci b/devtools/cocci/prefix_mbuf_offload_flags.cocci
new file mode 100644
index 0000000000..cac83032b4
--- /dev/null
+++ b/devtools/cocci/prefix_mbuf_offload_flags.cocci
@@ -0,0 +1,441 @@
+//
+// Rename mbuf offload flags (and some other defines) to have
+// an RTE_ prefix.
+// This only replaces usages in C code, so it is advised to
+// replace missing occurrences with:
+// for f in $(git ls-tree --full-tree -r --name-only HEAD); do
+//         if [ "$(file -b --mime-encoding $f)" = "binary" ]; then
+//                 continue
+//         fi
+//         sed -i \
+//             -e 's,EXT_ATTACHED_MBUF,RTE_MBUF_F_EXTERNAL,g' \
+//             -e 's,IND_ATTACHED_MBUF,RTE_MBUF_F_INDIRECT,g' \
+//             -e 's,EXT_ATTACHED_MBUF,RTE_MBUF_F_EXTERNAL,g' \
+//             -e 's,PKT_FIRST_FREE,RTE_MBUF_F_FIRST_FREE,g' \
+//             -e 's,PKT_LAST_FREE,RTE_MBUF_F_LAST_FREE,g' \
+//             -e 's,PKT_TX_DYNF_METADATA,RTE_MBUF_DYNFLAG_TX_METADATA,g' \
+//             -e 's,PKT_RX_DYNF_METADATA,RTE_MBUF_DYNFLAG_RX_METADATA,g' \
+//             -e 's,\<PKT_RX_,RTE_MBUF_F_RX_,g' \
+//             -e 's,\<PKT_TX_,RTE_MBUF_F_TX_,g' \
+//             $f
+// done
+//
+@@
+@@
+
+- PKT_RX_VLAN
++ RTE_MBUF_F_RX_VLAN
+
+@@
+@@
+
+- PKT_RX_RSS_HASH
++ RTE_MBUF_F_RX_RSS_HASH
+
+@@
+@@
+
+- PKT_RX_FDIR
++ RTE_MBUF_F_RX_FDIR
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_OUTER_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_EIP_CKSUM_BAD
++ RTE_MBUF_F_RX_EIP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_VLAN_STRIPPED
++ RTE_MBUF_F_RX_VLAN_STRIPPED
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_MASK
++ RTE_MBUF_F_RX_IP_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_GOOD
++ RTE_MBUF_F_RX_IP_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_NONE
++ RTE_MBUF_F_RX_IP_CKSUM_NONE
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_MASK
++ RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_GOOD
++ RTE_MBUF_F_RX_L4_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_NONE
++ RTE_MBUF_F_RX_L4_CKSUM_NONE
+
+@@
+@@
+
+- PKT_RX_IEEE1588_PTP
++ RTE_MBUF_F_RX_IEEE1588_PTP
+
+@@
+@@
+
+- PKT_RX_IEEE1588_TMST
++ RTE_MBUF_F_RX_IEEE1588_TMST
+
+@@
+@@
+
+- PKT_RX_FDIR_ID
++ RTE_MBUF_F_RX_FDIR_ID
+
+@@
+@@
+
+- PKT_RX_FDIR_FLX
++ RTE_MBUF_F_RX_FDIR_FLX
+
+@@
+@@
+
+- PKT_RX_QINQ_STRIPPED
++ RTE_MBUF_F_RX_QINQ_STRIPPED
+
+@@
+@@
+
+- PKT_RX_LRO
++ RTE_MBUF_F_RX_LRO
+
+@@
+@@
+
+- PKT_RX_SEC_OFFLOAD
++ RTE_MBUF_F_RX_SEC_OFFLOAD
+
+@@
+@@
+
+- PKT_RX_SEC_OFFLOAD_FAILED
++ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
+
+@@
+@@
+
+- PKT_RX_QINQ
++ RTE_MBUF_F_RX_QINQ
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_MASK
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_GOOD
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_INVALID
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+@@
+@@
+
+- PKT_FIRST_FREE
++ RTE_MBUF_F_FIRST_FREE
+
+@@
+@@
+
+- PKT_LAST_FREE
++ RTE_MBUF_F_LAST_FREE
+
+@@
+@@
+
+- PKT_TX_OUTER_UDP_CKSUM
++ RTE_MBUF_F_TX_OUTER_UDP_CKSUM
+
+@@
+@@
+
+- PKT_TX_UDP_SEG
++ RTE_MBUF_F_TX_UDP_SEG
+
+@@
+@@
+
+- PKT_TX_SEC_OFFLOAD
++ RTE_MBUF_F_TX_SEC_OFFLOAD
+
+@@
+@@
+
+- PKT_TX_MACSEC
++ RTE_MBUF_F_TX_MACSEC
+
+@@
+@@
+
+- PKT_TX_TUNNEL_VXLAN
++ RTE_MBUF_F_TX_TUNNEL_VXLAN
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GRE
++ RTE_MBUF_F_TX_TUNNEL_GRE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_IPIP
++ RTE_MBUF_F_TX_TUNNEL_IPIP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GENEVE
++ RTE_MBUF_F_TX_TUNNEL_GENEVE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_MPLSINUDP
++ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_VXLAN_GPE
++ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GTP
++ RTE_MBUF_F_TX_TUNNEL_GTP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_IP
++ RTE_MBUF_F_TX_TUNNEL_IP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_UDP
++ RTE_MBUF_F_TX_TUNNEL_UDP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_MASK
++ RTE_MBUF_F_TX_TUNNEL_MASK
+
+@@
+@@
+
+- PKT_TX_QINQ
++ RTE_MBUF_F_TX_QINQ
+
+@@
+@@
+
+- PKT_TX_QINQ_PKT
++ RTE_MBUF_F_TX_QINQ_PKT
+
+@@
+@@
+
+- PKT_TX_TCP_SEG
++ RTE_MBUF_F_TX_TCP_SEG
+
+@@
+@@
+
+- PKT_TX_IEEE1588_TMST
++ RTE_MBUF_F_TX_IEEE1588_TMST
+
+@@
+@@
+
+- PKT_TX_L4_NO_CKSUM
++ RTE_MBUF_F_TX_L4_NO_CKSUM
+
+@@
+@@
+
+- PKT_TX_TCP_CKSUM
++ RTE_MBUF_F_TX_TCP_CKSUM
+
+@@
+@@
+
+- PKT_TX_SCTP_CKSUM
++ RTE_MBUF_F_TX_SCTP_CKSUM
+
+@@
+@@
+
+- PKT_TX_UDP_CKSUM
++ RTE_MBUF_F_TX_UDP_CKSUM
+
+@@
+@@
+
+- PKT_TX_L4_MASK
++ RTE_MBUF_F_TX_L4_MASK
+
+@@
+@@
+
+- PKT_TX_IP_CKSUM
++ RTE_MBUF_F_TX_IP_CKSUM
+
+@@
+@@
+
+- PKT_TX_IPV4
++ RTE_MBUF_F_TX_IPV4
+
+@@
+@@
+
+- PKT_TX_IPV6
++ RTE_MBUF_F_TX_IPV6
+
+@@
+@@
+
+- PKT_TX_VLAN
++ RTE_MBUF_F_TX_VLAN
+
+@@
+@@
+
+- PKT_TX_VLAN_PKT
++ RTE_MBUF_F_TX_VLAN_PKT
+
+@@
+@@
+
+- PKT_TX_OUTER_IP_CKSUM
++ RTE_MBUF_F_TX_OUTER_IP_CKSUM
+
+@@
+@@
+
+- PKT_TX_OUTER_IPV4
++ RTE_MBUF_F_TX_OUTER_IPV4
+
+@@
+@@
+
+- PKT_TX_OUTER_IPV6
++ RTE_MBUF_F_TX_OUTER_IPV6
+
+@@
+@@
+
+- PKT_TX_OFFLOAD_MASK
++ RTE_MBUF_F_TX_OFFLOAD_MASK
+
+@@
+@@
+
+- EXT_ATTACHED_MBUF
++ RTE_MBUF_F_EXTERNAL
+
+@@
+@@
+
+- IND_ATTACHED_MBUF
++ RTE_MBUF_F_INDIRECT
+
+@@
+@@
+
+- PKT_TX_DYNF_METADATA
++ RTE_MBUF_DYNFLAG_TX_METADATA
+
+@@
+@@
+
+- PKT_RX_DYNF_METADATA
++ RTE_MBUF_DYNFLAG_RX_METADATA
-- 
2.30.2



* [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to offload flags
  2021-09-29 21:48 [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags Olivier Matz
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 2/3] cocci: add cocci script to rename mbuf " Olivier Matz
@ 2021-09-29 21:48 ` Olivier Matz
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
  3 siblings, 0 replies; 20+ messages in thread
From: Olivier Matz @ 2021-09-29 21:48 UTC (permalink / raw)
  To: dev

Fix the mbuf offload flags namespace by adding an RTE_ prefix to their
names. The old flags remain usable, but a deprecation warning is issued
at compile time.
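
A before/after sketch of the rename as seen from application code
(names taken from the mapping in the previous commit):

  /* old names: still accepted, but warn at compile time */
  m->ol_flags |= PKT_TX_IP_CKSUM | PKT_TX_IPV4;

  /* new RTE_-prefixed names */
  m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4;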

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 +++----
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |   4 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +--
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  26 +-
 drivers/net/cnxk/cn10k_tx.h                   | 172 ++++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  26 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 ++++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 ++++---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +--
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 107 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 ++--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 102 ++---
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 ++++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 +++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 398 ++++++++++++------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 169 files changed, 3085 insertions(+), 2975 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533..826c06c4b5 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -481,12 +481,12 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 	if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr = l3_hdr;
 
-		ol_flags |= PKT_TX_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_IPV4;
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
-			ol_flags |= PKT_TX_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		} else {
 			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
-				ol_flags |= PKT_TX_IP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
 				ipv4_hdr->hdr_checksum =
@@ -494,7 +494,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 	} else if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV6))
-		ol_flags |= PKT_TX_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_IPV6;
 	else
 		return 0; /* packet type not supported, nothing to do */
 
@@ -503,7 +503,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
 			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
-				ol_flags |= PKT_TX_UDP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
 				udp_hdr->dgram_cksum =
@@ -512,13 +512,13 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_UDP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 	} else if (info->l4_proto == IPPROTO_TCP) {
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
-			ol_flags |= PKT_TX_TCP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
 			tcp_hdr->cksum =
@@ -526,7 +526,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 					info->ethertype);
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	} else if (info->l4_proto == IPPROTO_SCTP) {
 		sctp_hdr = (struct rte_sctp_hdr *)
 			((char *)l3_hdr + info->l3_len);
@@ -534,7 +534,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		 * offloaded */
 		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
-			ol_flags |= PKT_TX_SCTP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 		} else {
 			sctp_hdr->cksum = 0;
 			/* XXX implement CRC32c, example available in
@@ -557,14 +557,14 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 
 	if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr->hdr_checksum = 0;
-		ol_flags |= PKT_TX_OUTER_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 
 		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
-			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	} else
-		ol_flags |= PKT_TX_OUTER_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
 
 	if (info->outer_l4_proto != IPPROTO_UDP)
 		return ol_flags;
@@ -573,7 +573,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		((char *)outer_l3_hdr + info->outer_l3_len);
 
 	if (tso_enabled)
-		ol_flags |= PKT_TX_TCP_SEG;
+		ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
 	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
@@ -584,7 +584,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 			udp_hdr->dgram_cksum
 				= rte_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
 
-		ol_flags |= PKT_TX_OUTER_UDP_CKSUM;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_UDP_CKSUM;
 		return ol_flags;
 	}
 
@@ -855,17 +855,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		info.is_tunnel = 0;
 		info.pkt_len = rte_pktmbuf_pkt_len(m);
 		tx_ol_flags = m->ol_flags &
-			      (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF);
+			      (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL);
 		rx_ol_flags = m->ol_flags;
 
 		/* Update the L3/L4 checksum error packet statistics */
-		if ((rx_ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) == RTE_MBUF_F_RX_IP_CKSUM_BAD)
 			rx_bad_ip_csum += 1;
-		if ((rx_ol_flags & PKT_RX_L4_CKSUM_MASK) == PKT_RX_L4_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) == RTE_MBUF_F_RX_L4_CKSUM_BAD)
 			rx_bad_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_L4_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD)
 			rx_bad_outer_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_IP_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)
 			rx_bad_outer_ip_csum += 1;
 
 		/* step 1: dissect packet, parsing optional vlan, ip4/ip6, vxlan
@@ -888,26 +888,26 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gtp(udp_hdr, &info);
 				if (info.is_tunnel) {
-					tx_ol_flags |= PKT_TX_TUNNEL_GTP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GTP;
 					goto tunnel_update;
 				}
 				parse_vxlan_gpe(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN_GPE;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
 					goto tunnel_update;
 				}
 				parse_vxlan(udp_hdr, &info,
 					    m->packet_type);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN;
 					goto tunnel_update;
 				}
 				parse_geneve(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_GENEVE;
+						RTE_MBUF_F_TX_TUNNEL_GENEVE;
 					goto tunnel_update;
 				}
 			} else if (info.l4_proto == IPPROTO_GRE) {
@@ -917,14 +917,14 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gre(gre_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_GRE;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GRE;
 			} else if (info.l4_proto == IPPROTO_IPIP) {
 				void *encap_ip_hdr;
 
 				encap_ip_hdr = (char *)l3_hdr + info.l3_len;
 				parse_encap_ip(encap_ip_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_IPIP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_IPIP;
 			}
 		}
 
@@ -950,7 +950,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			tx_ol_flags |= process_outer_cksums(outer_l3_hdr, &info,
 					tx_offloads,
-					!!(tx_ol_flags & PKT_TX_TCP_SEG));
+					!!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG));
 		}
 
 		/* step 3: fill the mbuf meta data (flags and header lengths) */
@@ -1014,7 +1014,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				"l4_proto=%d l4_len=%d flags=%s\n",
 				info.l2_len, rte_be_to_cpu_16(info.ethertype),
 				info.l3_len, info.l4_proto, info.l4_len, buf);
-			if (rx_ol_flags & PKT_RX_LRO)
+			if (rx_ol_flags & RTE_MBUF_F_RX_LRO)
 				printf("rx: m->lro_segsz=%u\n", m->tso_segsz);
 			if (info.is_tunnel == 1)
 				printf("rx: outer_l2_len=%d outer_ethertype=%x "
@@ -1035,17 +1035,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
 				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
-				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
+				    (tx_ol_flags & RTE_MBUF_F_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
 						m->outer_l2_len,
 						m->outer_l3_len);
 				if (info.tunnel_tso_segsz != 0 &&
-						(m->ol_flags & PKT_TX_TCP_SEG))
+						(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 					printf("tx: m->tso_segsz=%d\n",
 						m->tso_segsz);
 			} else if (info.tso_segsz != 0 &&
-					(m->ol_flags & PKT_TX_TCP_SEG))
+					(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 				printf("tx: m->tso_segsz=%d\n", m->tso_segsz);
 			rte_get_tx_ol_flag_list(m->ol_flags, buf, sizeof(buf));
 			printf("tx: flags=%s", buf);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index d0360b4363..8d9eeb057c 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN;
+		ol_flags |= RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		if (!nb_pkt || !nb_clones) {
@@ -152,7 +152,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 								   sizeof(*ip_hdr));
 			pkt->nb_segs		= 1;
 			pkt->pkt_len		= pkt_size;
-			pkt->ol_flags		&= EXT_ATTACHED_MBUF;
+			pkt->ol_flags		&= RTE_MBUF_F_EXTERNAL;
 			pkt->ol_flags		|= ol_flags;
 			pkt->vlan_tci		= vlan_tci;
 			pkt->vlan_tci_outer	= vlan_tci_outer;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 034f238c34..7f3a6c95e7 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -114,7 +114,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 	eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)) {
 		if (eth_type == RTE_ETHER_TYPE_1588) {
 			printf("Port %u Received PTP packet not filtered"
 			       " by hardware\n",
@@ -163,7 +163,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	 * Check that the received PTP packet has been timestamped by the
 	 * hardware.
 	 */
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_TMST)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_TMST)) {
 		printf("Port %u Received PTP packet not timestamped"
 		       " by hardware\n",
 		       fs->rx_port);
@@ -183,7 +183,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	rte_ether_addr_copy(&addr, &eth_hdr->s_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
-	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
+	mb->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 	fs->tx_packets += 1;
 	if (rte_eth_tx_burst(fs->rx_port, fs->tx_queue, &mb, 1) == 0) {
 		printf("Port %u sent PTP packet dropped\n", fs->rx_port);
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 21be8bb470..ea979c4a71 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,11 +73,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
 			rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i + 1],
@@ -88,7 +88,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 				&eth_hdr->d_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->s_addr);
-		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+		mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
 		mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a4..0d43d5cceb 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -11,11 +11,11 @@ ol_flags_init(uint64_t tx_offload)
 	uint64_t ol_flags = 0;
 
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
-			PKT_TX_VLAN : 0;
+			RTE_MBUF_F_TX_VLAN : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
-			PKT_TX_QINQ : 0;
+			RTE_MBUF_F_TX_QINQ : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
-			PKT_TX_MACSEC : 0;
+			RTE_MBUF_F_TX_MACSEC : 0;
 
 	return ol_flags;
 }
@@ -26,10 +26,10 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 {
 	int i;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci = vlan;
-	if (ol_flags & PKT_TX_QINQ)
+	if (ol_flags & RTE_MBUF_F_TX_QINQ)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci_outer = outer_vlan;
 }
@@ -37,7 +37,7 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 static inline void
 mbuf_field_set(struct rte_mbuf *mb, uint64_t ol_flags)
 {
-	mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+	mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 	mb->ol_flags |= ol_flags;
 	mb->l2_len = sizeof(struct rte_ether_hdr);
 	mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index ab7cd622c7..881a1120e9 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -207,7 +207,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 
 	rte_pktmbuf_reset_headroom(pkt);
 	pkt->data_len = tx_pkt_seg_lengths[0];
-	pkt->ol_flags &= EXT_ATTACHED_MBUF;
+	pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	pkt->ol_flags |= ol_flags;
 	pkt->vlan_tci = vlan_tci;
 	pkt->vlan_tci_outer = vlan_tci_outer;
@@ -353,11 +353,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	/*
 	 * Initialize Ethernet header.
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 14a9a251fb..a3c4318164 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -151,20 +151,20 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 			  eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
-		if (ol_flags & PKT_RX_RSS_HASH) {
+		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS hash=0x%x",
 				  (unsigned int) mb->hash.rss);
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS queue=0x%x", (unsigned int) queue);
 		}
-		if (ol_flags & PKT_RX_FDIR) {
+		if (ol_flags & RTE_MBUF_F_RX_FDIR) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - FDIR matched ");
-			if (ol_flags & PKT_RX_FDIR_ID)
+			if (ol_flags & RTE_MBUF_F_RX_FDIR_ID)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "ID=0x%x", mb->hash.fdir.hi);
-			else if (ol_flags & PKT_RX_FDIR_FLX)
+			else if (ol_flags & RTE_MBUF_F_RX_FDIR_FLX)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "flex bytes=0x%08x %08x",
 					  mb->hash.fdir.hi, mb->hash.fdir.lo);
@@ -176,18 +176,18 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		if (is_timestamp_enabled(mb))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - timestamp %"PRIu64" ", get_timestamp(mb));
-		if (ol_flags & PKT_RX_QINQ)
+		if (ol_flags & RTE_MBUF_F_RX_QINQ)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
 				  mb->vlan_tci, mb->vlan_tci_outer);
-		else if (ol_flags & PKT_RX_VLAN)
+		else if (ol_flags & RTE_MBUF_F_RX_VLAN)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - VLAN tci=0x%x", mb->vlan_tci);
-		if (!is_rx && (ol_flags & PKT_TX_DYNF_METADATA))
+		if (!is_rx && (ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Tx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
-		if (is_rx && (ol_flags & PKT_RX_DYNF_METADATA))
+		if (is_rx && (ol_flags & RTE_MBUF_DYNFLAG_RX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Rx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
@@ -325,7 +325,7 @@ tx_pkt_set_md(uint16_t port_id, __rte_unused uint16_t queue,
 		for (i = 0; i < nb_pkts; i++) {
 			*RTE_FLOW_DYNF_METADATA(pkts[i]) =
 						ports[port_id].tx_metadata;
-			pkts[i]->ol_flags |= PKT_TX_DYNF_METADATA;
+			pkts[i]->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
 		}
 	return nb_pkts;
 }
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..1bec63b0e8 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -1622,8 +1622,8 @@ inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
 			"ibuf pkt_len is not equal to obuf pkt_len");
 
 		/* check mbuf ol_flags */
-		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
-			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD,
+			    "ibuf RTE_MBUF_F_TX_SEC_OFFLOAD is not set");
 	}
 	return 0;
 }
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 9a248dfaea..ee034fb898 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1495,7 +1495,7 @@ test_get_rx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_L4_CKSUM_MASK, buf, 0);
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_L4_CKSUM_MASK, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1526,7 +1526,8 @@ test_get_rx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_SEC_OFFLOAD, buf, sizeof(buf));
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_SEC_OFFLOAD, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1553,7 +1554,7 @@ test_get_tx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_IP_CKSUM, buf, 0);
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_IP_CKSUM, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1585,7 +1586,8 @@ test_get_tx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_UDP_CKSUM, buf, sizeof(buf));
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_UDP_CKSUM, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1611,28 +1613,28 @@ test_get_rx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name rx_flags[] = {
-		VAL_NAME(PKT_RX_VLAN),
-		VAL_NAME(PKT_RX_RSS_HASH),
-		VAL_NAME(PKT_RX_FDIR),
-		VAL_NAME(PKT_RX_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_L4_CKSUM_NONE),
-		VAL_NAME(PKT_RX_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_IP_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_IP_CKSUM_NONE),
-		VAL_NAME(PKT_RX_OUTER_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_VLAN_STRIPPED),
-		VAL_NAME(PKT_RX_IEEE1588_PTP),
-		VAL_NAME(PKT_RX_IEEE1588_TMST),
-		VAL_NAME(PKT_RX_FDIR_ID),
-		VAL_NAME(PKT_RX_FDIR_FLX),
-		VAL_NAME(PKT_RX_QINQ_STRIPPED),
-		VAL_NAME(PKT_RX_LRO),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD_FAILED),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_INVALID),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN),
+		VAL_NAME(RTE_MBUF_F_RX_RSS_HASH),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_PTP),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_ID),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_FLX),
+		VAL_NAME(RTE_MBUF_F_RX_QINQ_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_LRO),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID),
 	};
 
 	/* Test case to check with valid flag */
@@ -1663,31 +1665,31 @@ test_get_tx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name tx_flags[] = {
-		VAL_NAME(PKT_TX_VLAN),
-		VAL_NAME(PKT_TX_IP_CKSUM),
-		VAL_NAME(PKT_TX_TCP_CKSUM),
-		VAL_NAME(PKT_TX_SCTP_CKSUM),
-		VAL_NAME(PKT_TX_UDP_CKSUM),
-		VAL_NAME(PKT_TX_IEEE1588_TMST),
-		VAL_NAME(PKT_TX_TCP_SEG),
-		VAL_NAME(PKT_TX_IPV4),
-		VAL_NAME(PKT_TX_IPV6),
-		VAL_NAME(PKT_TX_OUTER_IP_CKSUM),
-		VAL_NAME(PKT_TX_OUTER_IPV4),
-		VAL_NAME(PKT_TX_OUTER_IPV6),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN),
-		VAL_NAME(PKT_TX_TUNNEL_GRE),
-		VAL_NAME(PKT_TX_TUNNEL_IPIP),
-		VAL_NAME(PKT_TX_TUNNEL_GENEVE),
-		VAL_NAME(PKT_TX_TUNNEL_MPLSINUDP),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN_GPE),
-		VAL_NAME(PKT_TX_TUNNEL_IP),
-		VAL_NAME(PKT_TX_TUNNEL_UDP),
-		VAL_NAME(PKT_TX_QINQ),
-		VAL_NAME(PKT_TX_MACSEC),
-		VAL_NAME(PKT_TX_SEC_OFFLOAD),
-		VAL_NAME(PKT_TX_UDP_SEG),
-		VAL_NAME(PKT_TX_OUTER_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_VLAN),
+		VAL_NAME(RTE_MBUF_F_TX_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_SCTP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GRE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IPIP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GENEVE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_MPLSINUDP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_UDP),
+		VAL_NAME(RTE_MBUF_F_TX_QINQ),
+		VAL_NAME(RTE_MBUF_F_TX_MACSEC),
+		VAL_NAME(RTE_MBUF_F_TX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_UDP_CKSUM),
 	};
 
 	/* Test case to check with valid flag */
@@ -1755,8 +1757,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is counted only for IPV4 packet */
 	/* set both IP checksum and IPV6 flags */
-	ol_flags |= PKT_TX_IP_CKSUM;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_CKSUM_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1765,14 +1767,14 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 	ol_flags = 0;
 
 	/* test to validate if IP type is set when required */
-	ol_flags |= PKT_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_L4_MASK;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: IP type is not set.\n", __func__);
 
 	/* test if IP type is set when TCP SEG is on */
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1780,8 +1782,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm IP type (IPV4/IPV6) is set */
-	ol_flags = PKT_TX_L4_MASK;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags = RTE_MBUF_F_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_SET",
 				pktmbuf_pool,
 				ol_flags, 0, 0) < 0)
@@ -1789,15 +1791,15 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to check TSO segment size is non-zero */
-	ol_flags |= PKT_TX_IPV4;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	/* set 0 tso segment size */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_NULL_TSO_SEGSZ",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: tso segment size is null.\n", __func__);
 
-	/* retain IPV4 and PKT_TX_TCP_SEG mask */
+	/* retain IPV4 and RTE_MBUF_F_TX_TCP_SEG mask */
 	/* set valid tso segment size but IP CKSUM not set */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_NOT_SET",
 				pktmbuf_pool,
@@ -1806,7 +1808,7 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is set for TSO capability */
 	/* retain IPV4, TCP_SEG, tso_seg size */
-	ol_flags |= PKT_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1814,8 +1816,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to confirm TSO for IPV6 type */
 	ol_flags = 0;
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1823,8 +1825,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test if outer IP checksum set for non outer IPv4 packet */
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 512, -EINVAL) < 0)
@@ -1832,8 +1834,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm outer IP checksum is set for outer IPV4 packet */
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
-	ol_flags |= PKT_TX_OUTER_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -2366,7 +2368,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	buf_iova = rte_mem_virt2iova(ext_buf_addr);
 	rte_pktmbuf_attach_extbuf(m, ext_buf_addr, buf_iova, buf_len,
 		ret_shinfo);
-	if (m->ol_flags != EXT_ATTACHED_MBUF)
+	if (m->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2380,7 +2382,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	/* attach the same external buffer to the cloned mbuf */
 	rte_pktmbuf_attach_extbuf(clone, ext_buf_addr, buf_iova, buf_len,
 			ret_shinfo);
-	if (clone->ol_flags != EXT_ATTACHED_MBUF)
+	if (clone->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2654,8 +2656,8 @@ test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
 			flag2, strerror(errno));
 
 	flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
-						rte_bsf64(PKT_LAST_FREE));
-	if (flag3 != rte_bsf64(PKT_LAST_FREE))
+						rte_bsf64(RTE_MBUF_F_LAST_FREE));
+	if (flag3 != rte_bsf64(RTE_MBUF_F_LAST_FREE))
 		GOTO_FAIL("failed to register dynamic flag 3, flag3=%d: %s",
 			flag3, strerror(errno));
 
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3..309b91a01f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -512,9 +512,9 @@ configured TPID.
     // enable VLAN insert offload
     testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
 
-    if (mbuf->ol_flags && PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+    if (mbuf->ol_flags & RTE_MBUF_F_TX_QINQ)        // case-1: insert VLAN to single-tagged packet
         tci_value = mbuf->vlan_tci_outer
-    else if (mbuf->ol_flags && PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+    else if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)   // case-2: insert VLAN to untagged packet
         tci_value = mbuf->vlan_tci
 
 VLAN Strip
@@ -528,7 +528,7 @@ The application configures the per-port VLAN strip offload.
     testpmd> port config (port_id) rx_offload vlan_strip (on|off)
 
     // notify application VLAN strip via mbuf
-    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_STRIPPED // outer VLAN is found and stripped
     mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
 
 Time Synchronization
@@ -552,7 +552,7 @@ packets to application via mbuf.
 .. code-block:: console
 
     // RX packet completion will indicate whether the packet is PTP
-    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+    mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
 
 Statistics Collection
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a..d5ffd51dea 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -279,9 +279,9 @@ inner and outer packets can be IPv4 or IPv6.
 - Rx checksum offloads.
 
   The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
-  Good checksum flags (e.g. ``PKT_RX_L4_CKSUM_GOOD``) indicate that the inner
+  Good checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_GOOD``) indicate that the inner
   packet has the correct checksum, and if applicable, the outer packet also
-  has the correct checksum. Bad checksum flags (e.g. ``PKT_RX_L4_CKSUM_BAD``)
+  has the correct checksum. Bad checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_BAD``)
   indicate that the inner and/or outer packets have invalid checksum values.
 
 - Inner Rx packet type classification
@@ -437,8 +437,8 @@ Limitations
 
 Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
 packets with the default VLAN tag are stripped by the adapter and presented to
-DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
-PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
+DPDK as untagged packets. In this case mbuf->vlan_tci and the RTE_MBUF_F_RX_VLAN and
+RTE_MBUF_F_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
 ``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
     -a 12:00.0,ig-vlan-rewrite=untag
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c9..0c059411e8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -210,7 +210,7 @@ Supports Large Receive Offload.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_LRO``, ``mbuf.tso_segsz``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
@@ -224,7 +224,7 @@ Supports TCP Segmentation Offloading.
 
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``RTE_MBUF_F_TX_TCP_SEG``, ``RTE_MBUF_F_TX_IPV4``, ``RTE_MBUF_F_TX_IPV6``, ``RTE_MBUF_F_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
 * **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
@@ -292,7 +292,7 @@ Supports RSS hashing on RX.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_inner_rss:
@@ -304,7 +304,7 @@ Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
 * **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_rss_key_update:
@@ -435,8 +435,8 @@ of protocol operations. See Security library and PMD documentation for more deta
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
 
 
@@ -458,8 +458,8 @@ protocol operations. See security library and PMD documentation for more details
   ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
 
 
@@ -483,9 +483,9 @@ Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN`` ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
@@ -501,9 +501,9 @@ Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
-  ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_QINQ``, ``mbuf.vlan_tci_outer``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ``,
+  ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
@@ -533,12 +533,12 @@ Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
-  ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
-  ``PKT_RX_IP_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_BAD`` | ``RTE_MBUF_F_RX_IP_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
@@ -552,13 +552,13 @@ Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
-  ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_L4_NO_CKSUM`` | ``RTE_MBUF_F_TX_TCP_CKSUM`` |
+  ``RTE_MBUF_F_TX_SCTP_CKSUM`` | ``RTE_MBUF_F_TX_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
-  ``PKT_RX_L4_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_L4_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
@@ -570,7 +570,7 @@ Timestamp offload
 Supports Timestamp.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
@@ -584,7 +584,7 @@ Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
@@ -598,12 +598,12 @@ Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
@@ -616,11 +616,11 @@ Inner L4 checksum
 Supports inner packet L4 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
-  ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e634382..97e4648a38 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -264,7 +264,7 @@ Intel 82599 10 Gigabit Ethernet Controller Specification Update (Revision 2.87)
 Errata: 44 Integrity Error Reported for IPv4/UDP Packets With Zero Checksum
 
 To support UDP zero checksum, the zero and bad UDP checksum packet is marked as
-PKT_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
+RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
 validate it.
 
 Inline crypto processing support
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d8..9324ce7818 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -255,7 +255,7 @@ Limitations
   no MPRQ feature or vectorized code can be engaged.
 
 - When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
-  externally attached to a user-provided mbuf with having EXT_ATTACHED_MBUF in
+  externally attached to a user-provided mbuf with RTE_MBUF_F_EXTERNAL set in
   ol_flags. As the mempool for the external buffer is managed by PMD, all the
   Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
   the external buffers will be freed by PMD and the application which still
@@ -263,7 +263,7 @@ Limitations
 
 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
   enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
-  supported. Some Rx packets may not have PKT_RX_RSS_HASH.
+  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
 
 - IPv6 Multicast messages are not supported on VM, while promiscuous mode
   and allmulticast mode are both set to off.
@@ -644,7 +644,7 @@ Driver options
   the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
   A mempool for external buffers will be allocated and managed by PMD. If Rx
   packet is externally attached, ol_flags field of the mbuf will have
-  EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
+  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
   checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
 
 - ``rxqs_min_mprq`` parameter [int]
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b..6537f3d5d6 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -211,11 +211,11 @@ To segment an outgoing packet, an application must:
      responsibility to ensure that these flags are set.
 
    - For example, in order to segment TCP/IPv4 packets, the application should
-     add the ``PKT_TX_IPV4`` and ``PKT_TX_TCP_SEG`` flags to the mbuf's
+     add the ``RTE_MBUF_F_TX_IPV4`` and ``RTE_MBUF_F_TX_TCP_SEG`` flags to the mbuf's
      ol_flags.
 
    - If checksum calculation in hardware is required, the application should
-     also add the ``PKT_TX_TCP_CKSUM`` and ``PKT_TX_IP_CKSUM`` flags.
+     also add the ``RTE_MBUF_F_TX_TCP_CKSUM`` and ``RTE_MBUF_F_TX_IP_CKSUM`` flags.
 
 #. Check if the packet should be processed. Packets with one of the
    following properties are not processed and are returned immediately:
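
To make the flag setup described above concrete, a minimal sketch of a
GSO call for a TCP/IPv4 packet, assuming gso_ctx has already been
initialized per the library's setup steps:

    struct rte_mbuf *segs[64];      /* output burst; sizing is arbitrary */
    int nb;

    pkt->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_SEG;
    pkt->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;
    nb = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));
    if (nb < 0)
        rte_pktmbuf_free(pkt);      /* invalid parameters or out of mbufs */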
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e4..15b266c295 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -123,7 +123,7 @@ timestamp mechanism, the VLAN tagging and the IP checksum computation.
 
 On TX side, it is also possible for an application to delegate some
 processing to the hardware if it supports it. For instance, the
-PKT_TX_IP_CKSUM flag allows to offload the computation of the IPv4
+RTE_MBUF_F_TX_IP_CKSUM flag allows offloading the computation of the IPv4
 checksum.
 
 The following examples explain how to configure different TX offloads on
@@ -134,7 +134,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM
     set out_ip checksum to 0 in the packet
 
   This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
@@ -143,7 +143,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_UDP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -154,7 +154,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
@@ -165,7 +165,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_TCP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -179,8 +179,8 @@ a vxlan-encapsulated tcp packet:
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
     mb->l4_len = len(in_tcp)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
-      PKT_TX_TCP_SEG;
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM |
+      RTE_MBUF_F_TX_TCP_SEG;
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
@@ -194,8 +194,8 @@ a vxlan-encapsulated tcp packet:
     mb->outer_l3_len = len(out_ip)
     mb->l2_len = len(out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM  | \
-      PKT_TX_IP_CKSUM |  PKT_TX_TCP_CKSUM;
+    mb->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM  | \
+      RTE_MBUF_F_TX_IP_CKSUM |  RTE_MBUF_F_TX_TCP_CKSUM;
     set out_ip checksum to 0 in the packet
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
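
The last case expressed as C with the renamed flags (a sketch; lengths
assume an untagged Ethernet/IPv4/UDP/VXLAN encapsulation, and
RTE_MBUF_F_TX_IPV4 is included because requesting the inner IP checksum
requires the IP type bit, as the TX offload validation tests above show):

    mb->outer_l2_len = sizeof(struct rte_ether_hdr);
    mb->outer_l3_len = sizeof(struct rte_ipv4_hdr);
    mb->l2_len = sizeof(struct rte_udp_hdr) + sizeof(struct rte_vxlan_hdr) +
                 sizeof(struct rte_ether_hdr);
    mb->l3_len = sizeof(struct rte_ipv4_hdr);
    mb->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM |
                    RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                    RTE_MBUF_F_TX_TCP_CKSUM;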
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index eca855d601..f8416eaa02 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -290,7 +290,7 @@ Timestamp and latency calculation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Latency stats library marks the time in the timestamp field of the
-mbuf for the ingress packets and sets the ``PKT_RX_TIMESTAMP`` flag of
+mbuf for the ingress packets and sets the ``RTE_MBUF_F_RX_TIMESTAMP`` flag of
 ``ol_flags`` for the mbuf to indicate the marked time as a valid one.
 At the egress, the mbufs with the flag set are considered having valid
 timestamp and are used for the latency calculation.
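
To illustrate the consumer side (a sketch, not part of the patch), an
application checks the flag before trusting the field; the Rx timestamp
dynamic field and its flag are obtained once via
rte_mbuf_dyn_rx_timestamp_register():

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int ts_off;       /* offset of the Rx timestamp field */
    static uint64_t ts_mask; /* matching ol_flags bit */

    static int
    latency_init(void)
    {
        return rte_mbuf_dyn_rx_timestamp_register(&ts_off, &ts_mask);
    }

    /* Returns 0 when no valid timestamp was marked on the mbuf. */
    static rte_mbuf_timestamp_t
    rx_timestamp(struct rte_mbuf *m)
    {
        if (!(m->ol_flags & ts_mask))
            return 0;
        return *RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *);
    }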
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..8f9251953d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -687,9 +687,9 @@ Item: ``META``
 Matches 32 bit metadata item set.
 
 On egress, metadata can be set either by mbuf metadata field with
-PKT_TX_DYNF_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
+RTE_MBUF_DYNFLAG_TX_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
 action sets metadata for a packet and the metadata will be reported via
-``metadata`` dynamic field of ``rte_mbuf`` with PKT_RX_DYNF_METADATA flag.
+``metadata`` dynamic field of ``rte_mbuf`` with RTE_MBUF_DYNFLAG_RX_METADATA flag.
 
 - Default ``mask`` matches the specified Rx metadata value.
 
@@ -1656,8 +1656,8 @@ flows to loop between groups.
 Action: ``MARK``
 ^^^^^^^^^^^^^^^^
 
-Attaches an integer value to packets and sets ``PKT_RX_FDIR`` and
-``PKT_RX_FDIR_ID`` mbuf flags.
+Attaches an integer value to packets and sets ``RTE_MBUF_F_RX_FDIR`` and
+``RTE_MBUF_F_RX_FDIR_ID`` mbuf flags.
 
 This value is arbitrary and application-defined. Maximum allowed value
 depends on the underlying implementation. It is returned in the
@@ -1677,7 +1677,7 @@ Action: ``FLAG``
 ^^^^^^^^^^^^^^^^
 
 Flags packets. Similar to `Action: MARK`_ without a specific value; only
-sets the ``PKT_RX_FDIR`` mbuf flag.
+sets the ``RTE_MBUF_F_RX_FDIR`` mbuf flag.
 
 - No configurable properties.
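
For completeness, the Rx-side contract for MARK and FLAG can be consumed like
this (a sketch, not part of the patch; handle_mark() and handle_flag() are
placeholder callbacks):

    #include <stdint.h>
    #include <rte_mbuf.h>

    extern void handle_mark(uint32_t id);
    extern void handle_flag(void);

    static void
    process_rx_mark(struct rte_mbuf *m)
    {
        if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
            handle_mark(m->hash.fdir.hi); /* MARK: value attached */
        else if (m->ol_flags & RTE_MBUF_F_RX_FDIR)
            handle_flag();                /* FLAG: match only */
    }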
 
@@ -2635,10 +2635,10 @@ Action: ``SET_META``
 
 Set metadata. Item ``META`` matches metadata.
 
-Metadata set by mbuf metadata field with PKT_TX_DYNF_METADATA flag on egress
+Metadata set by mbuf metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag on egress
 will be overridden by this action. On ingress, the metadata will be carried by
 ``metadata`` dynamic field of ``rte_mbuf`` which can be accessed by
-``RTE_FLOW_DYNF_METADATA()``. PKT_RX_DYNF_METADATA flag will be set along
+``RTE_FLOW_DYNF_METADATA()``. RTE_MBUF_DYNFLAG_RX_METADATA flag will be set along
 with the data.
 
 The mbuf dynamic field must be registered by calling
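
End to end, the metadata path described above could be exercised as below (an
illustration, not part of the patch; 0xcafe is an arbitrary value, and the
registration helper is rte_flow_dynf_metadata_register() from rte_flow.h):

    #include <stdint.h>
    #include <rte_flow.h>
    #include <rte_mbuf.h>

    /* Once at init: enable the metadata dynamic field and flags. */
    static int
    metadata_init(void)
    {
        return rte_flow_dynf_metadata_register();
    }

    /* Egress: attach metadata for a META pattern item to match. */
    static void
    set_tx_metadata(struct rte_mbuf *m)
    {
        *RTE_FLOW_DYNF_METADATA(m) = 0xcafe;
        m->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
    }

    /* Ingress: read metadata reported by a SET_META action. */
    static uint32_t
    get_rx_metadata(struct rte_mbuf *m)
    {
        if (m->ol_flags & RTE_MBUF_DYNFLAG_RX_METADATA)
            return *RTE_FLOW_DYNF_METADATA(m);
        return 0;
    }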
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 549e9416c4..fd277e28c9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,11 +39,6 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
-* mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
-  A compatibility layer will be kept until DPDK 22.11, except for the flags
-  that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
-  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``) which will be removed.
-
 * pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
   will be made internal in 21.11 and macros/data structures/functions defined
   in the header will not be considered as ABI anymore. This change is inspired
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 73e377a007..33519bf90b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -141,6 +141,10 @@ Removed Items
   blacklist/whitelist are removed. Users must use the new
   block/allow list arguments.
 
+* mbuf: The mbuf offload flags ``PKT_*`` are renamed to ``RTE_MBUF_F_*``. A
+  compatibility layer will be kept until DPDK 22.11, except for the flags that
+  were already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
+  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``), which are now removed.
 
 API Changes
 -----------
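
Since the old names stay usable (with a deprecation warning) until DPDK 22.11,
out-of-tree applications that build against both older and newer releases can
bridge the rename with a guard such as this sketch (not part of the patch;
extend the list to the flags actually used):

    #include <rte_mbuf.h>

    #ifndef RTE_MBUF_F_RX_VLAN
    /* Building against a pre-21.11 DPDK: map new names to old ones. */
    #define RTE_MBUF_F_RX_VLAN     PKT_RX_VLAN
    #define RTE_MBUF_F_TX_VLAN     PKT_TX_VLAN
    #define RTE_MBUF_F_TX_IP_CKSUM PKT_TX_IP_CKSUM
    #endif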
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index c5e0a83a8c..c9dc2cba9b 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -466,7 +466,7 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 682cf8b607..27207ab9a3 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -367,7 +367,7 @@ mlx5_crypto_addr2mr(struct mlx5_crypto_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index 8b056ddc5a..1300c4f155 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -428,53 +428,53 @@ octeontx_create_rx_ol_flags_array(void *mem)
 		errcode = idx & 0xff;
 		errlev = (idx & 0x700) >> 8;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case OCCTX_ERRLEV_RE:
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LC:
 			if (errcode == OCCTX_EC_IP4_CSUM) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LD:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LE:
 			if (errcode == OCCTX_EC_IP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LF:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		}
 
diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ccc6de588e 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -126,7 +126,7 @@ ssovf_octeontx_wqe_to_pkt(uint64_t work, uint16_t port_info,
 
 	if (!!(flag & OCCTX_RX_VLAN_FLTR_F)) {
 		if (likely(wqe->s.w2.vv)) {
-			mbuf->ol_flags |= PKT_RX_VLAN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->vlan_tci =
 				ntohs(*((uint16_t *)((char *)mbuf->buf_addr +
 					mbuf->data_off + wqe->s.w4.vlptr + 2)));
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 3e36dcece1..aa766c6602 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -277,7 +277,7 @@ otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	uint16_t ref_cnt = m->refcnt;
 
 	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
-	    (m->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+	    (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 		txq = otx2_ssogws_xtract_meta(m, txq_data);
 		return otx2_sec_event_tx(base, ev, m, txq, flags);
 	}
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 294132b759..e1a0a4cf94 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -147,7 +147,7 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		/* check for vlan info */
 		if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
 			mbuf->vlan_tci = ppd->tp_vlan_tci;
-			mbuf->ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			mbuf->ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 		}
 
 		/* release incoming frame and advance ring buffer */
@@ -224,7 +224,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306..402892cc9e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -15,20 +15,20 @@
 #include "hw_atl/hw_atl_b0_internal.h"
 
 #define ATL_TX_CKSUM_OFFLOAD_MASK (			 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_MASK (				 \
-	PKT_TX_VLAN |					 \
-	PKT_TX_IPV6 |					 \
-	PKT_TX_IPV4 |					 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_VLAN |					 \
+	RTE_MBUF_F_TX_IPV6 |					 \
+	RTE_MBUF_F_TX_IPV4 |					 \
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -850,21 +850,21 @@ atl_desc_to_offload_flags(struct atl_rx_queue *rxq,
 	if (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) {
 		/* IPv4 csum error ? */
 		if (rxd_wb->rx_stat & BIT(1))
-			mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	/* CSUM calculated ? */
 	if (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) {
 		if (rxd_wb->rx_stat & BIT(2))
-			mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 
 	return mbuf_flags;
@@ -1044,12 +1044,12 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);
 
 			if (rx_mbuf->packet_type & RTE_PTYPE_L2_ETHER_VLAN) {
-				rx_mbuf->ol_flags |= PKT_RX_VLAN;
+				rx_mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				rx_mbuf->vlan_tci = rxd_wb.vlan;
 
 				if (cfg->vlan_strip)
 					rx_mbuf->ol_flags |=
-						PKT_RX_VLAN_STRIPPED;
+						RTE_MBUF_F_RX_VLAN_STRIPPED;
 			}
 
 			if (!rx_mbuf_first)
@@ -1179,12 +1179,12 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 	uint32_t tx_cmd = 0;
 	uint64_t ol_flags = tx_pkt->ol_flags;
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_cmd |= tx_desc_cmd_lso | tx_desc_cmd_l4cs;
 
 		txc->cmd = 0x4;
 
-		if (ol_flags & PKT_TX_IPV6)
+		if (ol_flags & RTE_MBUF_F_TX_IPV6)
 			txc->cmd |= 0x2;
 
 		txc->l2_len = tx_pkt->l2_len;
@@ -1194,7 +1194,7 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 		txc->mss_len = tx_pkt->tso_segsz;
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_cmd |= tx_desc_cmd_vlan;
 		txc->vlan_tag = tx_pkt->vlan_tci;
 	}
@@ -1212,9 +1212,9 @@ atl_setup_csum_offload(struct rte_mbuf *mbuf, struct hw_atl_txd_s *txd,
 		       uint32_t tx_cmd)
 {
 	txd->cmd |= tx_desc_cmd_fcs;
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
 	/* L4 csum requested */
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
 	txd->cmd |= tx_cmd;
 }
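
The NOTSUP mask defined at the top of this file follows the usual PMD
convention: everything in RTE_MBUF_F_TX_OFFLOAD_MASK that this driver cannot
honour. A tx_prepare-style validation built on it looks roughly like this
(a sketch, not the driver's actual code):

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>

    /* Assumes ATL_TX_OFFLOAD_NOTSUP_MASK from atl_rxtx.c above. */
    static inline int
    atl_check_tx_offloads(struct rte_mbuf *m)
    {
        if (m->ol_flags & ATL_TX_OFFLOAD_NOTSUP_MASK) {
            rte_errno = ENOTSUP;
            return -1;
        }
        return 0;
    }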
 
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 01553958be..dbb100763e 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1310,7 +1310,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	src_offset = 0;
 
 	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-		ol_flags = PKT_RX_VLAN;
+		ol_flags = RTE_MBUF_F_RX_VLAN;
 		vlan_tci = pkt_buf->vlan_tci;
 	} else {
 		ol_flags = 0;
@@ -1568,7 +1568,7 @@ avp_recv_pkts(void *rx_queue,
 		m->port = avp->port_id;
 
 		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-			m->ol_flags = PKT_RX_VLAN;
+			m->ol_flags = RTE_MBUF_F_RX_VLAN;
 			m->vlan_tci = pkt_buf->vlan_tci;
 		}
 
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 45b9bd3e39..67149abf80 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -260,17 +260,17 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (
 				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
@@ -282,25 +282,25 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 				} else {
 					mbuf->ol_flags &=
-						~(PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED);
+						~(RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED);
 					mbuf->vlan_tci = 0;
 				}
 		}
 		/* Indicate if a Context Descriptor is next */
 		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, CDA))
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
-					| PKT_RX_IEEE1588_TMST;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
+					| RTE_MBUF_F_RX_IEEE1588_TMST;
 		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
 					     PL) - rxq->crc_len;
 		/* Mbuf populate */
@@ -426,17 +426,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 			} else {
 				mbuf->ol_flags &=
-					~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+					~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 				mbuf->vlan_tci = 0;
 			}
 		}
@@ -465,17 +465,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		first_seg->port = rxq->port_id;
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (unlikely(error_status
 						== AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 
@@ -795,7 +795,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
 			   mbuf->pkt_len);
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, TTSE, 1);
 	rte_wmb();
 	/* Mark it as First and Last Descriptor */
@@ -804,14 +804,14 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	/* Mark it as a NORMAL descriptor */
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
 	/* configure h/w Offload */
-	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
-	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+	mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+	if ((mask == RTE_MBUF_F_TX_TCP_CKSUM) || (mask == RTE_MBUF_F_TX_UDP_CKSUM))
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
-	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+	else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (mbuf->ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
index 1c962b9333..816371cd79 100644
--- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
+++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
@@ -23,7 +23,7 @@ axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
 {
 	uint64_t tmst_en = 0;
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		tmst_en = TX_DESC_CTRL_FLAG_TMST;
 	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
 					    TX_DESC_CTRL_FLAGS | mbuf->data_len
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 235c374180..6a710021dc 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN) {
+	if (m0->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 2b17602290..2570a6b252 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -435,7 +435,7 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (cqe_fp->pars_flags.flags & PARSING_FLAGS_VLAN) {
 			rx_mb->vlan_tci = cqe_fp->vlan_tag;
-			rx_mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			rx_mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		rx_pkts[nb_rx] = rx_mb;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index a40fa50138..882206f93c 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -260,25 +260,25 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	mbuf->pkt_len = rte_le_to_cpu_32(tpa_start->len);
 	mbuf->data_len = mbuf->pkt_len;
 	mbuf->port = rxq->port_id;
-	mbuf->ol_flags = PKT_RX_LRO;
+	mbuf->ol_flags = RTE_MBUF_F_RX_LRO;
 
 	bnxt_tpa_get_metadata(rxq->bp, tpa_info, tpa_start, tpa_start1);
 
 	if (likely(tpa_info->hash_valid)) {
 		mbuf->hash.rss = tpa_info->rss_hash;
-		mbuf->ol_flags |= PKT_RX_RSS_HASH;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	} else if (tpa_info->cfa_code_valid) {
 		mbuf->hash.fdir.id = tpa_info->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 	}
 
 	if (tpa_info->vlan_valid && BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 		mbuf->vlan_tci = tpa_info->vlan;
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 
 	if (likely(tpa_info->l4_csum_valid))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* recycle next mbuf */
 	data_cons = RING_NEXT(data_cons);
@@ -576,34 +576,34 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 
 		if (BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 			if (i & RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN)
-				pt[i] |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+				pt[i] |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		if (i & (RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC << 3)) {
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 			} else {
 				if (i & RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-				pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 			if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-				pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 	}
 
@@ -616,30 +616,30 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else {
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_IP_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 			if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_L4_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		}
 	}
 }
@@ -677,13 +677,13 @@ bnxt_set_ol_flags(struct bnxt_rx_ring_info *rxr, struct rx_pkt_cmpl *rxcmp,
 
 	if (flags_type & RX_PKT_CMPL_FLAGS_RSS_VALID) {
 		mbuf->hash.rss = rte_le_to_cpu_32(rxcmp->rss_hash);
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
 		     RX_PKT_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP))
-		ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+		ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 
 	mbuf->ol_flags = ol_flags;
@@ -807,7 +807,7 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 		mbuf->hash.fdir.hi = mark_id;
 		*bnxt_cfa_code_dynfield(mbuf) = cfa_code & 0xffffffffull;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		return mark_id;
 	}
 
@@ -854,7 +854,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 	}
 
 	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
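
The two loops above precompute every flags2/errors combination into per-queue
tables so that the Rx fast path replaces a chain of branches with one masked
load. The pattern in isolation (a generic sketch with hypothetical status
bits, not the bnxt descriptor layout):

    #include <stdint.h>
    #include <rte_mbuf_core.h>

    #define HW_IP_CS_OK 0x1 /* hypothetical per-packet status bits */
    #define HW_L4_CS_OK 0x2

    static uint64_t ol_flags_tbl[4];

    /* Fill the table once at queue setup. */
    static void
    init_ol_flags_tbl(void)
    {
        unsigned int i;

        for (i = 0; i < 4; i++)
            ol_flags_tbl[i] =
                ((i & HW_IP_CS_OK) ?
                 RTE_MBUF_F_RX_IP_CKSUM_GOOD :
                 RTE_MBUF_F_RX_IP_CKSUM_BAD) |
                ((i & HW_L4_CS_OK) ?
                 RTE_MBUF_F_RX_L4_CKSUM_GOOD :
                 RTE_MBUF_F_RX_L4_CKSUM_BAD);
    }

    /* The Rx burst loop then does: m->ol_flags = ol_flags_tbl[st & 0x3]; */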
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 59adb7242c..a84f016609 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -212,7 +212,7 @@ static inline void bnxt_rx_vlan_v2(struct rte_mbuf *mbuf,
 {
 	if (RX_CMP_VLAN_VALID(rxcmp)) {
 		mbuf->vlan_tci = RX_CMP_METADATA0_VID(rxcmp1);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 }
 
@@ -276,47 +276,47 @@ static inline void bnxt_parse_csum_v2(struct rte_mbuf *mbuf,
 			t_pkt = 1;
 
 		if (unlikely(RX_CMP_V2_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_L4_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (unlikely(RX_CMP_V2_L3_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_IP_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	} else {
 		hdr_cnt = RX_CMP_V2_L4_CS_OK(flags2);
 		if (hdr_cnt > 1)
 			t_pkt = 1;
 
 		if (RX_CMP_V2_L4_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (RX_CMP_V2_L4_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (RX_CMP_V2_L3_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (RX_CMP_V2_L3_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	if (t_pkt) {
 		if (unlikely(RX_CMP_V2_OT_L4_CS_ERR(error_v2) ||
 					RX_CMP_V2_T_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 		if (unlikely(RX_CMP_V2_T_IP_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	}
 }
 
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 5d3cdfa8f2..964b5772b0 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -108,12 +108,12 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 static bool
 bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
-	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
-				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
-				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
-				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ) ||
+	if (tx_pkt->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_TCP_CKSUM |
+				RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM |
+				RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+				RTE_MBUF_F_TX_TUNNEL_GRE | RTE_MBUF_F_TX_TUNNEL_VXLAN |
+				RTE_MBUF_F_TX_TUNNEL_GENEVE | RTE_MBUF_F_TX_IEEE1588_TMST |
+				RTE_MBUF_F_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -200,13 +200,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
+		} else if (tx_pkt->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
@@ -236,7 +236,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		else
 			txbd1->cfa_action = txq->bp->tx_cfa_action;
 
-		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			uint16_t hdr_size;
 
 			/* TSO */
@@ -244,7 +244,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					 TX_BD_LONG_LFLAGS_T_IPID;
 			hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len;
-			hdr_size += (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				    tx_pkt->outer_l2_len +
 				    tx_pkt->outer_l3_len : 0;
 			/* The hdr_size is multiple of 16bit units not 8bit.
@@ -299,24 +299,24 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			   PKT_TX_TCP_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
-			   PKT_TX_TCP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ==
+			   RTE_MBUF_F_TX_TCP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
-			   PKT_TX_UDP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) ==
+			   RTE_MBUF_F_TX_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
-			   PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
-			   PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IEEE1588_TMST) ==
-			   PKT_TX_IEEE1588_TMST) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) ==
+			   RTE_MBUF_F_TX_IEEE1588_TMST) {
 			/* PTP */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_STAMP;
 		}
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 6bfdc6d01a..e11343c082 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -60,25 +60,25 @@ int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_flush_tx_cmp(struct bnxt_cp_ring_info *cpr);
 
-#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
-					 PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_OIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_CKSUM		(RTE_MBUF_F_TX_IP_CKSUM |	\
+					 RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_TCP_UDP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM)
 
 
 #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..d516a1fcc3 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -112,7 +112,7 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	return !((mbuf->ol_flags & PKT_RX_VLAN) ? mbuf->vlan_tci : 0) &&
+	return !((mbuf->ol_flags & RTE_MBUF_F_RX_VLAN) ? mbuf->vlan_tci : 0) &&
 		(ethertype == ether_type_slow_be &&
 		(subtype == SLOW_SUBTYPE_MARKER || subtype == SLOW_SUBTYPE_LACP));
 }
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14..784a979f44 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -47,15 +47,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
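
These build-time asserts exist because the cnxk fast path derives NIX
descriptor fields straight from the flag bit positions (see the ">> 52"
shifts later in cn10k_tx.h), so a silent relayout of the flags must break the
build rather than the wire format. The idiom in isolation (a sketch, not the
driver's code):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_mbuf_core.h>

    /* Extract the 2-bit L4 checksum request directly from ol_flags. */
    static inline uint8_t
    l4type_from_ol_flags(uint64_t ol_flags)
    {
        RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
        RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_L4_MASK != (3ULL << 52));
        return (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
    }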
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8c19..a1990d83b2 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -104,9 +104,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -190,7 +190,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -198,11 +198,11 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
@@ -305,7 +305,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -316,7 +316,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -443,10 +443,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -519,8 +519,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae07ae..0e8c8c71b5 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -171,12 +171,12 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -185,18 +185,18 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -211,7 +211,7 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -261,11 +261,11 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -277,15 +277,15 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -296,16 +296,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -321,27 +321,27 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -352,20 +352,20 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -414,7 +414,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			      const uint16_t flags)
 {
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
 					(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
@@ -434,7 +434,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* For packets without RTE_MBUF_F_TX_IEEE1588_TMST set, the Tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrpt the actual tx tstamp registered
@@ -721,7 +721,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -730,20 +730,20 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 		uint8_t shift = is_udp_tun ? 32 : 0;
 
-		shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-		shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 		w1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -1282,26 +1282,26 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1486,40 +1486,40 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1707,11 +1707,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QnQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -1753,8 +1753,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestam. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
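For readers cross-checking the shuffle tables above: each il4type:il3type byte is selected by the mbuf Tx offload bits. A scalar sketch of the same lookup, assuming (as the RTE_BUILD_BUG_ON checks below assert) that the L4 type sits in bits 52-53, RTE_MBUF_F_TX_IP_CKSUM at bit 54 and RTE_MBUF_F_TX_IPV4 at bit 55; the helper name is hypothetical, not part of the patch:

    #include <stdint.h>

    /* Hypothetical scalar equivalent of the vector table lookup above;
     * the index is simply bits [55:52] of ol_flags.
     */
    static inline uint8_t
    nix_il4type_il3type(uint64_t ol_flags)
    {
    	static const uint8_t tbl[16] = {
    		0x04, 0x14, 0x24, 0x34, /* IPv6 assumed: none/TCP/SCTP/UDP */
    		0x03, 0x13, 0x23, 0x33, /* + IP_CKSUM */
    		0x02, 0x12, 0x22, 0x32, /* + IPV4 */
    		0x03, 0x13, 0x23, 0x33, /* + IPV4 | IP_CKSUM */
    	};

    	return tbl[(ol_flags >> 52) & 0xf];
    }
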
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916..08090fd4ac 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -47,15 +47,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
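The build-time checks above are the contract that makes the rename safe: only the names change, the bit positions stay put, and the fast path keeps deriving hardware fields straight from them (e.g. il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52). A minimal sketch of the same guard outside the driver, assuming a C11 toolchain with _Static_assert:

    #include <rte_mbuf_core.h>

    /* Sketch only: pin the renamed flags to their historical bit
     * positions at compile time.
     */
    _Static_assert(RTE_MBUF_F_TX_TCP_CKSUM == (1ULL << 52),
    	       "Tx L4 checksum type field must start at bit 52");
    _Static_assert(RTE_MBUF_F_TX_IPV4 == (1ULL << 55),
    	       "Tx IPv4 flag must stay at bit 55");
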
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index a3bf4e0b63..6714743f96 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -99,9 +99,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -186,7 +186,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -194,11 +194,11 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->cn9k.vtag0_tci;
 		}
 		if (rx->cn9k.vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->cn9k.vtag1_tci;
 		}
 	}
@@ -302,7 +302,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -313,7 +313,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -414,10 +414,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -490,8 +490,8 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						rxq->tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
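As the match_id comment earlier in this file notes, mark values are shifted down by one on delivery. An application-side sketch of consuming the FDIR flags set by nix_update_match_id(); 'm' is a received struct rte_mbuf * and the handlers are hypothetical:

    /* Sketch: recover the rte_flow MARK value, if any, from the mbuf. */
    if (m->ol_flags & RTE_MBUF_F_RX_FDIR) {
    	if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
    		handle_mark(m->hash.fdir.hi);	/* hypothetical handler */
    	else
    		handle_flag(m);			/* FLAG action, no mark */
    }
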
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index ed65cd351f..584378d759 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -61,12 +61,12 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -75,18 +75,18 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -101,7 +101,7 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -151,11 +151,11 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -167,15 +167,15 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment by 1 if IPv4, since type 3 is IPv4 with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -186,16 +186,16 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment by 1 if IPv4, since type 3 is IPv4 with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -211,27 +211,27 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment by 1 if IPv4, since type 3 is IPv4 with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -242,20 +242,20 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -296,7 +296,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -309,7 +309,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it would corrupt the actual tx tstamp registered
@@ -553,7 +553,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -562,15 +562,15 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
@@ -578,7 +578,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 		/* Update format for UDP tunneled packet */
 		w0->lso_format += is_udp_tun ? 2 : 6;
 
-		w0->lso_format += !!(ol_flags & PKT_TX_OUTER_IPV6) << 1;
+		w0->lso_format += !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 1;
 	}
 }
 
@@ -1060,26 +1060,26 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1264,40 +1264,40 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1485,11 +1485,11 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for VLAN. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QinQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load VLAN values from packet. Outer is VLAN 0 */
@@ -1531,8 +1531,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestamp. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 27920c84f2..262f726998 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -116,8 +116,8 @@
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
 #define CNXK_NIX_UDP_TUN_BITMASK                                               \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |                               \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
@@ -481,15 +481,15 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*cnxk_nix_timestamp_dynfield(mbuf, tstamp) =
 			rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 				*cnxk_nix_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-					  PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+					  RTE_MBUF_F_RX_IEEE1588_TMST |
 					  tstamp->rx_tstamp_dynflag;
 		}
 	}
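The CNXK_NIX_UDP_TUN_BITMASK above works because the RTE_MBUF_F_TX_TUNNEL_* values form a small enum starting at bit 45, so shifting a tunnel flag right by 45 yields a usable bit index. A sketch of how it is consumed, mirroring the is_udp_tun computation in cn9k_tx.h/cn10k_tx.h (helper name hypothetical):

    /* Sketch: classify the requested tunnel as UDP-based (VXLAN/GENEVE)
     * by indexing the bitmask with the tunnel-type field of ol_flags.
     */
    static inline int
    nix_tun_is_udp(uint64_t ol_flags)
    {
    	const uint64_t tun = ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK;

    	return (CNXK_NIX_UDP_TUN_BITMASK >> (tun >> 45)) & 0x1;
    }
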
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 0152ad906a..d3e78fa58b 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -242,9 +242,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -252,46 +252,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
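Worth remembering when reading the GOOD/BAD assignments above: each Rx checksum status is a two-bit field, not an independent bit, so applications must mask before comparing. A sketch using the renamed mask (RTE_MBUF_F_RX_IP_CKSUM_MASK, the counterpart of PKT_RX_IP_CKSUM_MASK); 'm' is a received mbuf and the handlers are hypothetical:

    /* Sketch: interpret the Rx IP checksum status on a received mbuf. */
    switch (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) {
    case RTE_MBUF_F_RX_IP_CKSUM_GOOD:
    	break;				/* verified by hardware */
    case RTE_MBUF_F_RX_IP_CKSUM_BAD:
    	drop_packet(m);			/* hypothetical handler */
    	break;
    default:
    	verify_in_software(m);		/* UNKNOWN or NONE */
    	break;
    }
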
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 3299d6252e..20aa84b653 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -539,7 +539,7 @@ static inline unsigned int flits_to_desc(unsigned int n)
  */
 static inline int is_eth_imm(const struct rte_mbuf *m)
 {
-	unsigned int hdrlen = (m->ol_flags & PKT_TX_TCP_SEG) ?
+	unsigned int hdrlen = (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ?
 			      sizeof(struct cpl_tx_pkt_lso_core) : 0;
 
 	hdrlen += sizeof(struct cpl_tx_pkt);
@@ -749,12 +749,12 @@ static u64 hwcsum(enum chip_type chip, const struct rte_mbuf *m)
 {
 	int csum_type;
 
-	if (m->ol_flags & PKT_TX_IP_CKSUM) {
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+	if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			csum_type = TX_CSUM_TCPIP;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			csum_type = TX_CSUM_UDPIP;
 			break;
 		default:
@@ -1029,7 +1029,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	/* fill the cpl message, same as in t4_eth_xmit, this should be kept
 	 * similar to t4_eth_xmit
 	 */
-	if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		cntrl = hwcsum(adap->params.chip, mbuf) |
 			       F_TXPKT_IPCSUM_DIS;
 		txq->stats.tx_cso++;
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1129,7 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		return 0;
 	}
 
-	if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
+	if ((!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) &&
 	    (unlikely(m->pkt_len > max_pkt_len)))
 		goto out_free;
 
@@ -1140,7 +1140,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	/* align the end of coalesce WR to a 512 byte boundary */
 	txq->q.coalesce.max = (8 - (txq->q.pidx & 7)) * 8;
 
-	if (!((m->ol_flags & PKT_TX_TCP_SEG) ||
+	if (!((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ||
 			m->pkt_len > RTE_ETHER_MAX_LEN)) {
 		if (should_tx_packet_coalesce(txq, mbuf, &cflits, adap)) {
 			if (unlikely(map_mbuf(mbuf, addr) < 0)) {
@@ -1203,7 +1203,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	len += sizeof(*cpl);
 
 	/* Coalescing skipped and we send through normal path */
-	if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+	if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		wr->op_immdlen = htonl(V_FW_WR_OP(is_pf4(adap) ?
 						  FW_ETH_TX_PKT_WR :
 						  FW_ETH_TX_PKT_VM_WR) |
@@ -1212,7 +1212,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			cpl = (void *)(wr + 1);
 		else
 			cpl = (void *)(vmwr + 1);
-		if (m->ol_flags & PKT_TX_IP_CKSUM) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			cntrl = hwcsum(adap->params.chip, m) |
 				F_TXPKT_IPCSUM_DIS;
 			txq->stats.tx_cso++;
@@ -1222,7 +1222,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			lso = (void *)(wr + 1);
 		else
 			lso = (void *)(vmwr + 1);
-		v6 = (m->ol_flags & PKT_TX_IPV6) != 0;
+		v6 = (m->ol_flags & RTE_MBUF_F_TX_IPV6) != 0;
 		l3hdr_len = m->l3_len;
 		l4hdr_len = m->l4_len;
 		eth_xtra_len = m->l2_len - RTE_ETHER_HDR_LEN;
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
@@ -1528,27 +1528,27 @@ static inline void cxgbe_fill_mbuf_info(struct adapter *adap,
 
 	if (cpl->vlan_ex)
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER_VLAN,
-				    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+				    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	else
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER, 0);
 
 	if (cpl->l2info & htonl(F_RXF_IP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV4,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_IP6))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV6,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 
 	if (cpl->l2info & htonl(F_RXF_TCP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_TCP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_UDP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_UDP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 }
 
 /**
@@ -1639,7 +1639,7 @@ static int process_responses(struct sge_rspq *q, int budget,
 
 				if (!rss_hdr->filter_tid &&
 				    rss_hdr->hash_type) {
-					pkt->ol_flags |= PKT_RX_RSS_HASH;
+					pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					pkt->hash.rss =
 						ntohl(rss_hdr->hash_val);
 				}
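hwcsum() above only sees what the application declared; for those branches to fire, the sender has to fill in the header lengths and request the offloads before rte_eth_tx_burst(). A minimal sketch of that transmit-side contract, for an mbuf 'm' carrying a plain IPv4/TCP frame:

    /* Sketch: request IPv4 header and TCP checksum offload on 'm'. */
    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
    	       RTE_MBUF_F_TX_TCP_CKSUM;
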
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c2..98edc53359 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -80,10 +80,9 @@
 	ETH_RSS_TCP | \
 	ETH_RSS_SCTP)
 
-#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
-		PKT_TX_IP_CKSUM |                \
-		PKT_TX_TCP_CKSUM |               \
-		PKT_TX_UDP_CKSUM)
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |                \
+		RTE_MBUF_F_TX_TCP_CKSUM |               \
+		RTE_MBUF_F_TX_UDP_CKSUM)
 
 /* DPAA Frame descriptor macros */
 
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 423de40e95..ffac6ce3e2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -125,8 +125,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
 
-	m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD |
-		PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (prs) {
 	case DPAA_PKT_TYPE_IPV4:
@@ -204,13 +204,13 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 		break;
 	case DPAA_PKT_TYPE_IPV4_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_IPV4_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV4_UDP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_UDP_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_NONE:
 		m->packet_type = 0;
@@ -229,7 +229,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	/* Check if VLAN is present */
 	if (prs & DPAA_PARSE_VLAN_MASK)
-		m->ol_flags |= PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN;
 	/* Packet received without stripping the VLAN */
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f491f4d10a..267090c59b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -114,7 +114,7 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 		m->packet_type = dpaa2_dev_rx_parse_slow(m, annotation);
 	}
 	m->hash.rss = fd->simple.flc_hi;
-	m->ol_flags |= PKT_RX_RSS_HASH;
+	m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (dpaa2_enable_ts[m->port]) {
 		*dpaa2_timestamp_dynfield(m) = annotation->word2;
@@ -141,20 +141,20 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (BIT_ISSET_AT_POS(annotation->word1, DPAA2_ETH_FAS_PTP))
-		mbuf->ol_flags |= PKT_RX_IEEE1588_PTP;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 #endif
 
 	if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_1_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 		pkt_type |= RTE_PTYPE_L2_ETHER_VLAN;
 	} else if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_N_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_QINQ;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_QINQ;
 		pkt_type |= RTE_PTYPE_L2_ETHER_QINQ;
 	}
 
@@ -189,9 +189,9 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 	}
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (BIT_ISSET_AT_POS(annotation->word4, L3_IP_1_FIRST_FRAGMENT |
 	    L3_IP_1_MORE_FRAGMENT |
@@ -232,9 +232,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 			   annotation->word4);
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
@@ -1228,9 +1228,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN) ||
-						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_MBUF_F_TX_VLAN) ||
+						     (eth_data->dev_conf.txmode.offloads
+						      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1271,9 +1271,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
-				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+			if (unlikely(((*bufs)->ol_flags & RTE_MBUF_F_TX_VLAN) ||
+				     (eth_data->dev_conf.txmode.offloads
+				      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN)) {
+						& RTE_MBUF_F_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 0105e2d384..dbfdaecd4c 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -50,15 +50,14 @@
 
 #define E1000_RXDCTL_GRAN	0x01000000 /* RXDCTL Granularity */
 
-#define E1000_TX_OFFLOAD_MASK ( \
-		PKT_TX_IPV6 |           \
-		PKT_TX_IPV4 |           \
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN)
+#define E1000_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV6 |           \
+		RTE_MBUF_F_TX_IPV4 |           \
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
 
 /* PCI offset for querying configuration status register */
 #define PCI_CFG_STATUS_REG                 0x06
@@ -234,7 +233,7 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	 * When doing checksum or TCP segmentation with IPv6 headers,
 	 * IPCSE field should be set to 0.
 	 */
-	if (flags & PKT_TX_IP_CKSUM) {
+	if (flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
@@ -247,13 +246,13 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	ctx.upper_setup.tcp_fields.tucss = (uint8_t)ipcse;
 	ctx.upper_setup.tcp_fields.tucse = 0;
 
-	switch (flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_udp_hdr, dgram_cksum));
 		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
@@ -356,8 +355,8 @@ tx_desc_cksum_flags_to_upper(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
 	uint32_t tmp;
 
-	tmp = l4_olinfo[(ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	return tmp;
 }
 
@@ -410,7 +409,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = tx_pkt->ol_flags;
 
 		/* If hardware offload required */
-		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
+		tx_ol_req = (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK));
 		if (tx_ol_req) {
 			hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
 			hdrlen.f.l2_len = tx_pkt->l2_len;
@@ -506,7 +505,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN) {
+		if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
@@ -656,7 +655,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 	return pkt_flags;
 }
@@ -667,9 +666,9 @@ rx_desc_error_to_pkt_flags(uint32_t rx_error)
 	uint64_t pkt_flags = 0;
 
 	if (rx_error & E1000_RXD_ERR_IPE)
-		pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	if (rx_error & E1000_RXD_ERR_TCPE)
-		pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	return pkt_flags;
 }
 
@@ -811,7 +810,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->ol_flags = rxm->ol_flags |
 				rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
@@ -1037,7 +1036,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->ol_flags = first_seg->ol_flags |
 					rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
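The E1000_TX_OFFLOAD_NOTSUP_MASK defined at the top of this file is the XOR of the full RTE_MBUF_F_TX_OFFLOAD_MASK with what the PMD supports, which reduces per-packet validation to a single AND. A sketch of the test as the prep_pkts callbacks apply it, where 'm' and 'i' are the current packet and its index:

    /* Sketch: reject any packet requesting an offload outside
     * E1000_TX_OFFLOAD_MASK.
     */
    if (m->ol_flags & E1000_TX_OFFLOAD_NOTSUP_MASK) {
    	rte_errno = ENOTSUP;
    	return i;	/* index of the first offending packet */
    }
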
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index c630894052..f2e8c7e5b9 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -44,24 +44,23 @@
 #include "e1000_ethdev.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IGB_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IGB_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IGB_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGB_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
+#define IGB_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
 		IGB_TX_IEEE1588_TMST)
 
 #define IGB_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -224,12 +223,12 @@ struct igb_tx_queue {
 static inline uint64_t
 check_tso_para(uint64_t ol_req, union igb_tx_offload ol_para)
 {
-	if (!(ol_req & PKT_TX_TCP_SEG))
+	if (!(ol_req & RTE_MBUF_F_TX_TCP_SEG))
 		return ol_req;
 	if ((ol_para.tso_segsz > IGB_TSO_MAX_MSS) || (ol_para.l2_len +
 			ol_para.l3_len + ol_para.l4_len > IGB_TSO_MAX_HDRLEN)) {
-		ol_req &= ~PKT_TX_TCP_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req &= ~RTE_MBUF_F_TX_TCP_SEG;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -260,13 +259,13 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4 |
 				E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
@@ -279,26 +278,26 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << E1000_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << E1000_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -357,9 +356,9 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
-	tmp |= l4_olinfo[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
+	tmp |= l4_olinfo[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return tmp;
 }
 
@@ -369,8 +368,8 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
-	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
+	cmdtype |= tso_cmd[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
 
@@ -526,11 +525,11 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		cmd_type_len = txq->txd_type |
 			E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_DEXT;
-		if (tx_ol_req & PKT_TX_TCP_SEG)
+		if (tx_ol_req & RTE_MBUF_F_TX_TCP_SEG)
 			pkt_len -= (tx_pkt->l2_len + tx_pkt->l3_len + tx_pkt->l4_len);
 		olinfo_status = (pkt_len << E1000_ADVTXD_PAYLEN_SHIFT);
 #if defined(RTE_LIBRTE_IEEE1588)
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= E1000_ADVTXD_MAC_TSTAMP;
 #endif
 		if (tx_ol_req) {
@@ -628,7 +627,7 @@ eth_igb_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 
 		/* Check some limitations for TSO in hardware */
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			if ((m->tso_segsz > IGB_TSO_MAX_MSS) ||
 					(m->l2_len + m->l3_len + m->l4_len >
 					IGB_TSO_MAX_HDRLEN)) {
@@ -743,11 +742,11 @@ igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
 static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(struct igb_rx_queue *rxq, uint32_t hl_tp_rs)
 {
-	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : PKT_RX_RSS_HASH;
+	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : RTE_MBUF_F_RX_RSS_HASH;
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	static uint32_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -773,11 +772,11 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (rx_status & E1000_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -791,10 +790,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	 */
 
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	return error_to_pkt_flags_map[(rx_status >>
 		E1000_RXD_ERR_CKSUM_BIT) & E1000_RXD_ERR_CKSUM_MSK];
@@ -936,7 +935,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
@@ -1176,7 +1175,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
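eth_igb_prep_pkts() above is reached through the generic prepare stage, so applications that want its TSO limit checks must call that stage explicitly before transmitting. A hedged usage sketch; 'port', 'queue', 'pkts', 'n' and the error handler are assumed names:

    /* Sketch: validate offload requests before the actual transmit. */
    uint16_t nb_ok = rte_eth_tx_prepare(port, queue, pkts, n);

    if (nb_ok != n)
    	log_bad_pkt(pkts[nb_ok], rte_errno);	/* hypothetical handler */
    nb_ok = rte_eth_tx_burst(port, queue, pkts, nb_ok);
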
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68..73f8f94766 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -120,9 +120,9 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 			DEV_TX_OFFLOAD_UDP_CKSUM |\
 			DEV_TX_OFFLOAD_IPV4_CKSUM |\
 			DEV_TX_OFFLOAD_TCP_TSO)
-#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
-		       PKT_TX_IP_CKSUM |\
-		       PKT_TX_TCP_SEG)
+#define MBUF_OFFLOADS (RTE_MBUF_F_TX_L4_MASK |\
+		       RTE_MBUF_F_TX_IP_CKSUM |\
+		       RTE_MBUF_F_TX_TCP_SEG)
 
 /** Vendor ID used by Amazon devices */
 #define PCI_VENDOR_ID_AMAZON 0x1D0F
@@ -130,15 +130,14 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define PCI_DEVICE_ID_ENA_VF		0xEC20
 #define PCI_DEVICE_ID_ENA_VF_RSERV0	0xEC21
 
-#define	ENA_TX_OFFLOAD_MASK	(\
-	PKT_TX_L4_MASK |         \
-	PKT_TX_IPV6 |            \
-	PKT_TX_IPV4 |            \
-	PKT_TX_IP_CKSUM |        \
-	PKT_TX_TCP_SEG)
+#define	ENA_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_L4_MASK |         \
+	RTE_MBUF_F_TX_IPV6 |            \
+	RTE_MBUF_F_TX_IPV4 |            \
+	RTE_MBUF_F_TX_IP_CKSUM |        \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define	ENA_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
 
 static const struct rte_pci_id pci_id_ena_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF) },
@@ -274,24 +273,24 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) {
 		packet_type |= RTE_PTYPE_L3_IPV4;
 		if (unlikely(ena_rx_ctx->l3_csum_err))
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV6) {
 		packet_type |= RTE_PTYPE_L3_IPV6;
 	}
 
 	if (!ena_rx_ctx->l4_csum_checked || ena_rx_ctx->frag)
-		ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	else
 		if (unlikely(ena_rx_ctx->l4_csum_err))
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (fill_hash &&
 	    likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = ena_rx_ctx->hash;
 	}
 
@@ -309,7 +308,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if ((mbuf->ol_flags & MBUF_OFFLOADS) &&
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
-		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
@@ -317,11 +316,11 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L3 checksum is needed */
-		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
-		if (mbuf->ol_flags & PKT_TX_IPV6) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6;
 		} else {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4;
@@ -334,12 +333,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L4 checksum is needed */
-		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
+		if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
-		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_UDP_CKSUM) &&
+		} else if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_UDP_CKSUM) &&
 				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
@@ -2151,7 +2150,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash);
 
 		if (unlikely(mbuf->ol_flags &
-				(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
+				(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD))) {
 			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
 		}
@@ -2193,7 +2192,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (!(ol_flags & PKT_TX_IPV4))
+		if (!(ol_flags & RTE_MBUF_F_TX_IPV4))
 			continue;
 
 		/* If there was no L2 header length specified, assume it is
@@ -2217,8 +2216,8 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -2237,7 +2236,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 
 		ret = rte_net_intel_cksum_flags_prepare(m,
-			ol_flags & ~PKT_TX_TCP_SEG);
+			ol_flags & ~RTE_MBUF_F_TX_TCP_SEG);
 		if (ret != 0) {
 			rte_errno = -ret;
 			return i;
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 412322523d..ea64c9f682 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -174,80 +174,80 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
 static inline void enetc_slow_parsing(struct rte_mbuf *m,
 				     uint64_t parse_results)
 {
-	m->ol_flags &= ~(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+	m->ol_flags &= ~(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	switch (parse_results) {
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	/* More switch cases can be added */
 	default:
 		m->packet_type = RTE_PTYPE_UNKNOWN;
-		m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN |
-			       PKT_RX_L4_CKSUM_UNKNOWN;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN |
+			       RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 }
 
@@ -256,7 +256,7 @@ static inline void __rte_hot
 enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
 {
 	ENETC_PMD_DP_DEBUG("parse summary = 0x%x   ", parse_results);
-	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (parse_results) {
 	case ENETC_PKT_TYPE_ETHER:
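
A note on consuming these flags: the RX checksum indications are two-bit fields (UNKNOWN, BAD, GOOD and NONE share bit positions), so applications should compare against the field mask rather than test a single bit. A minimal sketch, assuming only the renamed definitions from rte_mbuf_core.h (rx_csum_ok() is illustrative, not a DPDK API):

	#include <rte_mbuf.h>

	/* Return 1 unless the driver reported a bad IP or L4 checksum. */
	static inline int
	rx_csum_ok(const struct rte_mbuf *m)
	{
		if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
		    RTE_MBUF_F_RX_IP_CKSUM_BAD)
			return 0;
		if ((m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
		    RTE_MBUF_F_RX_L4_CKSUM_BAD)
			return 0;
		return 1;
	}
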
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b312e216ef 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -250,7 +250,7 @@ void enic_init_vnic_resources(struct enic *enic)
 			error_interrupt_offset);
 		/* Compute unsupported ol flags for enic_prep_pkts() */
 		enic->wq[index].tx_offload_notsup_mask =
-			PKT_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
+			RTE_MBUF_F_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
 
 		cq_idx = enic_cq_wq(enic, index);
 		vnic_cq_init(&enic->cq[cq_idx],
@@ -1755,10 +1755,10 @@ enic_enable_overlay_offload(struct enic *enic)
 		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
 		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 	enic->overlay_offload = true;
 
 	if (enic->vxlan && enic->geneve)
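
The XOR above is the usual way a PMD derives its reject mask: everything in the global RTE_MBUF_F_TX_OFFLOAD_MASK that the device cannot handle. A hedged sketch of the matching tx_prepare-side check (the `supported` value is illustrative; enic builds its mask incrementally as shown above):

	uint64_t supported = RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK |
			     RTE_MBUF_F_TX_TCP_SEG;
	uint64_t notsup_mask = RTE_MBUF_F_TX_OFFLOAD_MASK ^ supported;

	if (m->ol_flags & notsup_mask) {
		rte_errno = ENOTSUP;	/* reject the packet in tx_prepare */
		return i;		/* packets accepted so far */
	}
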
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d0..e85f9f23fb 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -216,12 +216,12 @@ int enic_get_vnic_config(struct enic *enic)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_IP_CKSUM |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG;
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG;
 
 	return 0;
 }
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 3899907d6d..c44715bfd0 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -424,7 +424,7 @@ uint16_t enic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i != nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (unlikely(m->pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
 				rte_errno = EINVAL;
 				return i;
@@ -489,7 +489,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	wq_desc_avail = vnic_wq_desc_avail(wq);
 	head_idx = wq->head_idx;
 	desc_count = wq->ring.desc_count;
-	ol_flags_mask = PKT_TX_VLAN | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+	ol_flags_mask = RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK;
 	tx_oversized = &enic->soft_stats.tx_oversized;
 
 	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
@@ -500,7 +500,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		data_len = tx_pkt->data_len;
 		ol_flags = tx_pkt->ol_flags;
 		nb_segs = tx_pkt->nb_segs;
-		tso = ol_flags & PKT_TX_TCP_SEG;
+		tso = ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 
 		/* drop packet if it's too big to send */
 		if (unlikely(!tso && pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
@@ -517,7 +517,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		mss = 0;
 		vlan_id = tx_pkt->vlan_tci;
-		vlan_tag_insert = !!(ol_flags & PKT_TX_VLAN);
+		vlan_tag_insert = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		bus_addr = (dma_addr_t)
 			   (tx_pkt->buf_iova + tx_pkt->data_off);
 
@@ -543,20 +543,20 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
 			mss = tx_pkt->tso_segsz;
 			/* For tunnel, need the size of outer+inner headers */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				header_len += tx_pkt->outer_l2_len +
 					tx_pkt->outer_l3_len;
 			}
 		}
 
 		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
-			if (ol_flags & PKT_TX_IP_CKSUM)
+			if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				mss |= ENIC_CALC_IP_CKSUM;
 
 			/* Nic uses just 1 bit for UDP and TCP */
-			switch (ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
-			case PKT_TX_UDP_CKSUM:
+			switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				mss |= ENIC_CALC_TCP_UDP_CKSUM;
 				break;
 			}
@@ -634,7 +634,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		desc->header_length_flags &=
 			((1 << WQ_ENET_FLAGS_EOP_SHIFT) |
 			 (1 << WQ_ENET_FLAGS_CQ_ENTRY_SHIFT));
-		if (p->ol_flags & PKT_TX_VLAN) {
+		if (p->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			desc->header_length_flags |=
 				1 << WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT;
 		}
@@ -643,9 +643,9 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		 * is 0, so no need to set offload_mode.
 		 */
 		mss = 0;
-		if (p->ol_flags & PKT_TX_IP_CKSUM)
+		if (p->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			mss |= ENIC_CALC_IP_CKSUM << WQ_ENET_MSS_SHIFT;
-		if (p->ol_flags & PKT_TX_L4_MASK)
+		if (p->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 			mss |= ENIC_CALC_TCP_UDP_CKSUM << WQ_ENET_MSS_SHIFT;
 		desc->mss_loopback = mss;
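
RTE_MBUF_F_TX_L4_MASK is a two-bit field rather than an independent flag: the full TX path above switches on the masked value, while the simple path can test it as a boolean because any nonzero value means some L4 checksum was requested. The switch idiom in isolation, as a sketch:

	switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
	case RTE_MBUF_F_TX_TCP_CKSUM:
	case RTE_MBUF_F_TX_UDP_CKSUM:
		/* this NIC uses a single bit for TCP and UDP checksum */
		mss |= ENIC_CALC_TCP_UDP_CKSUM;
		break;
	default:
		break;
	}
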
 
diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h
index d8668d1898..9d6d3476b0 100644
--- a/drivers/net/enic/enic_rxtx_common.h
+++ b/drivers/net/enic/enic_rxtx_common.h
@@ -209,11 +209,11 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 
 	/* VLAN STRIPPED flag. The L2 packet type updated here also */
 	if (bwflags & CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
 	} else {
 		if (vlan_tci != 0) {
-			pkt_flags |= PKT_RX_VLAN;
+			pkt_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
 		} else {
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
@@ -227,16 +227,16 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 		clsf_cqd = (struct cq_enet_rq_clsf_desc *)cqd;
 		filter_id = clsf_cqd->filter_id;
 		if (filter_id) {
-			pkt_flags |= PKT_RX_FDIR;
+			pkt_flags |= RTE_MBUF_F_RX_FDIR;
 			if (filter_id != ENIC_MAGIC_FILTER_ID) {
 				/* filter_id = mark id + 1, so subtract 1 */
 				mbuf->hash.fdir.hi = filter_id - 1;
-				pkt_flags |= PKT_RX_FDIR_ID;
+				pkt_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			}
 		}
 	} else if (enic_cq_rx_desc_rss_type(cqrd)) {
 		/* RSS flag */
-		pkt_flags |= PKT_RX_RSS_HASH;
+		pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = enic_cq_rx_desc_rss_hash(cqrd);
 	}
 
@@ -254,17 +254,17 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 			 */
 			if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
 				if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			}
 
 			if (l4_flags == RTE_PTYPE_L4_UDP ||
 			    l4_flags == RTE_PTYPE_L4_TCP) {
 				if (enic_cq_rx_desc_tcp_udp_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 	}
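
When a classifier filter matched, the completion's filter_id is surfaced through the generic flow-director fields above. On the application side the renamed flags are read back like this (rx_handle_mark() is a hypothetical callback, not a DPDK API):

	if (m->ol_flags & RTE_MBUF_F_RX_FDIR) {
		if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
			rx_handle_mark(m->hash.fdir.hi); /* rte_flow MARK value */
	}
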
diff --git a/drivers/net/enic/enic_rxtx_vec_avx2.c b/drivers/net/enic/enic_rxtx_vec_avx2.c
index 1848f52717..600efff270 100644
--- a/drivers/net/enic/enic_rxtx_vec_avx2.c
+++ b/drivers/net/enic/enic_rxtx_vec_avx2.c
@@ -167,21 +167,21 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10);
-	/* PKT_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
+	/* RTE_MBUF_F_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
 	const __m256i rss_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0, /* rss_types = 0 */
 			/* first 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0 /* rss_types = 0 */);
 	/*
 	 * VLAN offload flags.
@@ -191,8 +191,8 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i vlan_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN);
 	/* Use the same shuffle index as vlan_shuffle */
 	const __m256i vlan_ptype_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
@@ -211,39 +211,39 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const __m256i csum_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
 			/* 1111 ip4+ip4_ok+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 1110 ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
 			0, /* 1001 */
 			0, /* 1000 */
 			/* 0111 !ip4_ok+ip4+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 0110 !ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
 			0, /* 0001 */
 			0, /* 0000 */
 			/* first 128 bits */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0,
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0);
 	/*
 	 * Non-fragment PTYPEs.
@@ -471,7 +471,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			break;
 
 		/*
-		 * Compute PKT_RX_RSS_HASH.
+		 * Compute RTE_MBUF_F_RX_RSS_HASH.
 		 * Use 2 shifts and 1 shuffle for 8 desc: 0.375 inst/desc
 		 * RSS types in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
@@ -479,7 +479,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		__m256i rss_types =
 			_mm256_srli_epi32(_mm256_slli_epi32(flags0_7, 10), 28);
 		/*
-		 * RSS flags (PKT_RX_RSS_HASH) are in
+		 * RSS flags (RTE_MBUF_F_RX_RSS_HASH) are in
 		 * byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
 		 */
@@ -557,7 +557,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		vlan0_7 = _mm256_sub_epi32(zero4, vlan0_7);
 
 		/*
-		 * Compute PKT_RX_VLAN and PKT_RX_VLAN_STRIPPED.
+		 * Compute RTE_MBUF_F_RX_VLAN and RTE_MBUF_F_RX_VLAN_STRIPPED.
 		 * Use 3 shifts, 1 or,  1 shuffle for 8 desc: 0.625 inst/desc
 		 * VLAN offload flags in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
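
The recurring `>> 1` in the shuffle tables exists because _mm256_shuffle_epi8 produces bytes while RTE_MBUF_F_RX_L4_CKSUM_GOOD is bit 8; pre-shifting every table entry right by one makes each combination fit in eight bits, and the driver shifts the shuffle result back before folding it into ol_flags. A compile-time sketch of the invariant the tables rely on (not part of the patch):

	#include <stdint.h>
	#include <rte_mbuf.h>

	_Static_assert(((RTE_MBUF_F_RX_IP_CKSUM_GOOD |
			 RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1) <= UINT8_MAX,
		       "csum flag pairs must fit in one byte after >> 1");
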
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 496e72a003..b232d09104 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -37,16 +37,15 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 }
 #endif
 
-#define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |            \
-		PKT_TX_IPV4 |            \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_TCP_SEG)
+#define FM10K_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |            \
+		RTE_MBUF_F_TX_IPV4 |            \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define FM10K_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
 
 /* @note: When this function is changed, make corresponding change to
  * fm10k_dev_supported_ptypes_get()
@@ -78,21 +77,21 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
 						>> FM10K_RXD_PKTTYPE_SHIFT];
 
 	if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) ==
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)))
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) ==
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)))
-		m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 }
 
 uint16_t
@@ -131,10 +130,10 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, always RTE_MBUF_F_RX_VLAN flag is set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -292,10 +291,10 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, always RTE_MBUF_F_RX_VLAN flag is set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		first_seg->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		first_seg->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		first_seg->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -605,11 +604,11 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	/* set checksum flags on first descriptor of packet. SCTP checksum
 	 * offload is not supported, but we do not explicitly check for this
 	 * case in favor of greatly simplified processing. */
-	if (mb->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
+	if (mb->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN)
+	if (mb->ol_flags & RTE_MBUF_F_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
@@ -620,9 +619,9 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	q->hw_ring[q->next_free].buflen =
 			rte_cpu_to_le_16(rte_pktmbuf_data_len(mb));
 
-	if (mb->ol_flags & PKT_TX_TCP_SEG) {
+	if (mb->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-		hdrlen += (mb->ol_flags & PKT_TX_TUNNEL_MASK) ?
+		hdrlen += (mb->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  mb->outer_l2_len + mb->outer_l3_len : 0;
 		if (q->hw_ring[q->next_free].flags & FM10K_TXD_FLAG_FTAG)
 			hdrlen += sizeof(struct fm10k_ftag);
@@ -699,7 +698,7 @@ fm10k_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
-		if ((m->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				(m->tso_segsz < FM10K_TSO_MINMSS)) {
 			rte_errno = EINVAL;
 			return i;
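
Both TSO-related hunks above compute the same quantity: the full header length, which includes the outer headers only when a tunnel flag is present. As a standalone sketch:

	uint32_t hdr_len = m->l2_len + m->l3_len + m->l4_len;

	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
		hdr_len += m->outer_l2_len + m->outer_l3_len;
	/* payload carried by this packet, i.e. what TSO segments */
	uint32_t paylen = m->pkt_len - hdr_len;
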
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2d..7ecba9fef2 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -38,7 +38,7 @@ fm10k_reset_tx_queue(struct fm10k_tx_queue *txq);
 #define RXEFLAG_SHIFT     (13)
 /* IPE/L4E flag shift */
 #define L3L4EFLAG_SHIFT     (14)
-/* shift PKT_RX_L4_CKSUM_GOOD into one byte by 1 bit */
+/* shift RTE_MBUF_F_RX_L4_CKSUM_GOOD into one byte by 1 bit */
 #define CKSUM_SHIFT     (1)
 
 static inline void
@@ -52,10 +52,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	const __m128i pkttype_msk = _mm_set_epi16(
 			0x0000, 0x0000, 0x0000, 0x0000,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	/* mask everything except rss type */
 	const __m128i rsstype_msk = _mm_set_epi16(
@@ -75,10 +75,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i l3l4cksum_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
 
 	const __m128i rxe_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
@@ -87,9 +87,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	/* map rss type to rss hash flag */
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* Calculate RSS_hash and Vlan fields */
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4c..311b22ccd1 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -802,7 +802,7 @@ static inline uint64_t hinic_rx_rss_hash(uint32_t offload_type,
 	rss_type = HINIC_GET_RSS_TYPES(offload_type);
 	if (likely(rss_type != 0)) {
 		*rss_hash = cqe_hass_val;
-		return PKT_RX_RSS_HASH;
+		return RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	return 0;
@@ -815,33 +815,33 @@ static inline uint64_t hinic_rx_csum(uint32_t status, struct hinic_rxq *rxq)
 	struct hinic_nic_dev *nic_dev = rxq->nic_dev;
 
 	if (unlikely(!(nic_dev->rx_csum_en & HINIC_RX_CSUM_OFFLOAD_EN)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	/* most case checksum is ok */
 	checksum_err = HINIC_GET_RX_CSUM_ERR(status);
 	if (likely(checksum_err == 0))
-		return (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	/* If BYPASS bit set, all other status indications should be ignored */
 	if (unlikely(HINIC_CSUM_ERR_BYPASSED(checksum_err)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	flags = 0;
 
 	/* IP checksum error */
 	if (HINIC_CSUM_ERR_IP(checksum_err))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* L4 checksum error */
 	if (HINIC_CSUM_ERR_L4(checksum_err))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(HINIC_CSUM_ERR_OTHER(checksum_err)))
-		flags = PKT_RX_L4_CKSUM_NONE;
+		flags = RTE_MBUF_F_RX_L4_CKSUM_NONE;
 
 	rxq->rxq_stats.errors++;
 
@@ -861,7 +861,7 @@ static inline uint64_t hinic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
 
 	*vlan_tci = vlan_tag;
 
-	return PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	return RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 }
 
 static inline u32 hinic_rx_alloc_mbuf_bulk(struct hinic_rxq *rxq,
@@ -1061,7 +1061,7 @@ u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
 		/* lro offload */
 		lro_num = HINIC_GET_RX_NUM_LRO(cqe.status);
 		if (unlikely(lro_num != 0)) {
-			rxm->ol_flags |= PKT_RX_LRO;
+			rxm->ol_flags |= RTE_MBUF_F_RX_LRO;
 			rxm->tso_segsz = pkt_len / lro_num;
 		}
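
A consumer-side note: when the hardware coalesced several segments, the driver sets RTE_MBUF_F_RX_LRO and reconstructs the average segment size, so an application can recover the original MSS (sketch):

	if (m->ol_flags & RTE_MBUF_F_RX_LRO)
		orig_mss = m->tso_segsz;  /* pkt_len / number of coalesced segments */
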
 
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index e14937139d..2688817f37 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN)) {
+	if (unlikely(ol_flags & RTE_MBUF_F_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
@@ -602,7 +602,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	if (unlikely(!(ol_flags & HINIC_TX_CKSUM_OFFLOAD_MASK)))
 		return;
 
-	if ((ol_flags & PKT_TX_TCP_SEG))
+	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		/* set tso info for task and qsf */
 		hinic_set_tso_info(task, queue_info, mbuf, tx_off_info);
 	else /* just support l4 checksum offload */
@@ -718,7 +718,7 @@ hinic_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len =
@@ -738,7 +738,7 @@ hinic_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		psd_hdr.len = 0;
 	else
 		psd_hdr.len = ipv6_hdr->payload_len;
@@ -754,10 +754,10 @@ static inline void hinic_get_outer_cs_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len + m->l4_len;
 }
@@ -767,10 +767,10 @@ static inline void hinic_get_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if (((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) ||
-	    ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM))
+	if (((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) ||
+	    ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_SCTP_CKSUM))
 		off_info->payload_offset = m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->l2_len + m->l3_len +
 					   m->l4_len;
 }
@@ -845,11 +845,11 @@ static inline uint8_t hinic_analyze_l3_type(struct rte_mbuf *mbuf)
 	uint8_t l3_type;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4)
-		l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
+		l3_type = (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ?
 			  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
 			  IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = IPV6_PKT;
 	else
 		l3_type = UNKNOWN_L3TYPE;
@@ -866,11 +866,11 @@ static inline void hinic_calculate_tcp_checksum(struct rte_mbuf *mbuf,
 	struct rte_tcp_hdr *tcp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
@@ -898,11 +898,11 @@ static inline void hinic_calculate_udp_checksum(struct rte_mbuf *mbuf,
 	struct rte_udp_hdr *udp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
@@ -938,21 +938,21 @@ static inline void hinic_calculate_checksum(struct rte_mbuf *mbuf,
 {
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hinic_calculate_udp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hinic_calculate_tcp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		hinic_calculate_sctp_checksum(off_info);
 		break;
 
 	default:
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			hinic_calculate_tcp_checksum(mbuf, off_info,
 						     inner_l3_offset);
 		break;
@@ -970,8 +970,8 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return 0;
 
 	/* Support only vxlan offload */
-	if (unlikely((ol_flags & PKT_TX_TUNNEL_MASK) &&
-	    !(ol_flags & PKT_TX_TUNNEL_VXLAN)))
+	if (unlikely((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		     !(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)))
 		return -ENOTSUP;
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
@@ -979,7 +979,7 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return -EINVAL;
 #endif
 
-	if (ol_flags & PKT_TX_TUNNEL_VXLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) {
 		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
 
 		/* inner_l4_tcp_udp csum should be set to calculate outer
@@ -987,9 +987,9 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		 */
 		off_info->inner_l4_tcp_udp = 1;
 
-		if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		    (ol_flags & PKT_TX_OUTER_IPV6) ||
-		    (ol_flags & PKT_TX_TCP_SEG)) {
+		if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		    (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) ||
+		    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			inner_l3_offset = m->l2_len + m->outer_l2_len +
 					  m->outer_l3_len;
 			off_info->outer_l2_len = m->outer_l2_len;
@@ -1057,7 +1057,7 @@ static inline bool hinic_get_sge_txoff_info(struct rte_mbuf *mbuf_pkt,
 	sqe_info->cpy_mbuf_cnt = 0;
 
 	/* non tso mbuf */
-	if (likely(!(mbuf_pkt->ol_flags & PKT_TX_TCP_SEG))) {
+	if (likely(!(mbuf_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
 		if (unlikely(mbuf_pkt->pkt_len > MAX_SINGLE_SGE_SIZE)) {
 			/* non-TSO packet length must be less than 64KB */
 			return false;
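
hinic open-codes the pseudo-header checksum with a zero length field when RTE_MBUF_F_TX_TCP_SEG is set, which matches the convention of the generic helper, so the IPv4/TCP case could equally be written with rte_ip.h (a sketch, assuming l2_len/l3_len are already filled in):

	#include <rte_ip.h>
	#include <rte_mbuf.h>
	#include <rte_tcp.h>

	struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
			struct rte_ipv4_hdr *, m->l2_len);
	struct rte_tcp_hdr *th = (struct rte_tcp_hdr *)((char *)ip + m->l3_len);

	/* rte_ipv4_phdr_cksum() zeroes the length field for TCP_SEG mbufs */
	th->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
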
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
index d98abad8da..a3ec6299fb 100644
--- a/drivers/net/hinic/hinic_pmd_tx.h
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -13,13 +13,12 @@
 #define HINIC_GET_WQ_TAIL(txq)		\
 		((txq)->wq->queue_buf_vaddr + (txq)->wq->wq_buf_size)
 
-#define HINIC_TX_CKSUM_OFFLOAD_MASK (	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_TCP_CKSUM |	\
-		PKT_TX_UDP_CKSUM |      \
-		PKT_TX_SCTP_CKSUM |	\
-		PKT_TX_OUTER_IP_CKSUM |	\
-		PKT_TX_TCP_SEG)
+#define HINIC_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_CKSUM |	\
+		RTE_MBUF_F_TX_UDP_CKSUM |      \
+		RTE_MBUF_F_TX_SCTP_CKSUM |	\
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_SEG)
 
 enum sq_wqe_type {
 	SQ_NORMAL_WQE = 0,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a1..7c8ee12b21 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -623,7 +623,7 @@ struct hns3_hw {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, PMD needs to
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
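
In the SW mode the PMD computes the UDP checksum itself before posting the descriptor; the generic helpers reduce that to two statements (IPv4 case only, as a sketch; hns3_udp_cksum_help() in this series also covers IPv6):

	udp_hdr->dgram_cksum = 0;
	udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum(ipv4_hdr, udp_hdr);
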
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 59ba9e7454..45e22edd74 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2329,11 +2329,11 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->vlan_tci = 0;
 		return;
 	case HNS3_INNER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.vlan_tag);
 		return;
 	case HNS3_OUTER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
 	default:
@@ -2383,7 +2383,7 @@ hns3_rx_ptp_timestamp_handle(struct hns3_rx_queue *rxq, struct rte_mbuf *mbuf,
 	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(rxq->hns);
 	uint64_t timestamp = rte_le_to_cpu_64(rxd->timestamp);
 
-	mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 	if (hns3_timestamp_rx_dynflag > 0) {
 		*RTE_MBUF_DYNFIELD(mbuf, hns3_timestamp_dynfield_offset,
 			rte_mbuf_timestamp_t *) = timestamp;
@@ -2469,11 +2469,11 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->data_len = rxm->pkt_len;
 		rxm->port = rxq->port_id;
 		rxm->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		rxm->ol_flags |= PKT_RX_RSS_HASH;
+		rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			rxm->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			rxm->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			rxm->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 		rxm->nb_segs = 1;
 		rxm->next = NULL;
@@ -2488,7 +2488,7 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->packet_type = hns3_rx_calc_ptype(rxq, l234_info, ol_info);
 
 		if (rxm->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, rxm, l234_info, &rxd);
 
@@ -2687,17 +2687,17 @@ hns3_recv_scattered_pkts(void *rx_queue,
 
 		first_seg->port = rxq->port_id;
 		first_seg->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		first_seg->ol_flags = PKT_RX_RSS_HASH;
+		first_seg->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			first_seg->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 
 		gro_size = hns3_get_field(bd_base_info, HNS3_RXD_GRO_SIZE_M,
 					  HNS3_RXD_GRO_SIZE_S);
 		if (gro_size != 0) {
-			first_seg->ol_flags |= PKT_RX_LRO;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_LRO;
 			first_seg->tso_segsz = gro_size;
 		}
 
@@ -2712,7 +2712,7 @@ hns3_recv_scattered_pkts(void *rx_queue,
 						l234_info, ol_info);
 
 		if (first_seg->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, first_seg, l234_info, &rxd);
 
@@ -3139,7 +3139,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 static inline bool
 hns3_pkt_is_tso(struct rte_mbuf *m)
 {
-	return (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG);
+	return (m->tso_segsz != 0 && m->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 }
 
 static void
@@ -3172,7 +3172,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	uint32_t paylen;
 
 	hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
-	hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			   rxm->outer_l2_len + rxm->outer_l3_len : 0;
 	paylen = rxm->pkt_len - hdr_len;
 	desc->tx.paylen_fd_dop_ol4cs |= rte_cpu_to_le_32(paylen);
@@ -3190,11 +3190,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN in the Tx descriptor being overwritten by PVID, it should
 	 * be added at a position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
-				PKT_TX_QINQ)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (RTE_MBUF_F_TX_VLAN |
+				                  RTE_MBUF_F_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ)
+		if (ol_flags & RTE_MBUF_F_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3202,14 +3202,14 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ ||
-	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & RTE_MBUF_F_TX_QINQ ||
+	    ((ol_flags & RTE_MBUF_F_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_IEEE1588_TMST)
+	if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		desc->tx.tp_fe_sc_vld_ra_ri |=
 				rte_cpu_to_le_16(BIT(HNS3_TXD_TSYN_B));
 }
@@ -3331,14 +3331,14 @@ hns3_parse_outer_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 					HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_CSUM);
 		else
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 				HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_NO_CSUM);
-	} else if (ol_flags & PKT_TX_OUTER_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) {
 		tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M, HNS3_TXD_OL3T_S,
 					HNS3_OL3T_IPV6);
 	}
@@ -3358,10 +3358,10 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 	uint64_t ol_flags = m->ol_flags;
 	uint16_t inner_l2_len;
 
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN_GPE:
-	case PKT_TX_TUNNEL_GENEVE:
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* MAC in UDP tunnelling packet, include VxLAN and GENEVE */
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 				HNS3_TXD_TUNTYPE_S, HNS3_TUN_MAC_IN_UDP);
@@ -3380,7 +3380,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 
 		inner_l2_len = m->l2_len - RTE_ETHER_VXLAN_HLEN;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 					HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
 		/*
@@ -3429,7 +3429,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 	 * calculations, the length of the L2 header, including the outer and
 	 * inner parts, will be filled in during the parsing of tunnel packets.
 	 */
-	if (!(ol_flags & PKT_TX_TUNNEL_MASK)) {
+	if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		/*
 		 * For non tunnel type the tunnel type id is 0, so no need to
 		 * assign a value to it. Only the inner(normal) L2 header length
@@ -3445,7 +3445,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 		 * calculate the header length.
 		 */
 		if (unlikely(!(ol_flags &
-			(PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_UDP_CKSUM)) &&
+			(RTE_MBUF_F_TX_OUTER_IP_CKSUM | RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
 					m->outer_l2_len == 0)) {
 			struct rte_net_hdr_lens hdr_len;
 			(void)rte_net_get_ptype(m, &hdr_len,
@@ -3462,7 +3462,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 
 	desc->tx.ol_type_vlan_len_msec = rte_cpu_to_le_32(tmp_outer);
 	desc->tx.type_cs_vlan_tso_len = rte_cpu_to_le_32(tmp_inner);
-	tmp_ol4cs = ol_flags & PKT_TX_OUTER_UDP_CKSUM ?
+	tmp_ol4cs = ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM ?
 			BIT(HNS3_TXD_OL4CS_B) : 0;
 	desc->tx.paylen_fd_dop_ol4cs = rte_cpu_to_le_32(tmp_ol4cs);
 
@@ -3477,9 +3477,9 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint32_t tmp;
 
 	tmp = *type_cs_vlan_tso_len;
-	if (ol_flags & PKT_TX_IPV4)
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
 		l3_type = HNS3_L3T_IPV4;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = HNS3_L3T_IPV6;
 	else
 		l3_type = HNS3_L3T_NONE;
@@ -3491,7 +3491,7 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	tmp |= hns3_gen_field_val(HNS3_TXD_L3T_M, HNS3_TXD_L3T_S, l3_type);
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= BIT(HNS3_TXD_L3CS_B);
 	*type_cs_vlan_tso_len = tmp;
 }
@@ -3502,20 +3502,20 @@ hns3_parse_l4_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint64_t ol_flags = m->ol_flags;
 	uint32_t tmp;
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG)) {
-	case PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG:
-	case PKT_TX_TCP_CKSUM:
-	case PKT_TX_TCP_SEG:
+	switch (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_SEG:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_SCTP);
@@ -3572,7 +3572,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
 
 	/* ensure the first 8 frags is greater than mss + header */
 	hdr_len = tx_pkts->l2_len + tx_pkts->l3_len + tx_pkts->l4_len;
-	hdr_len += (tx_pkts->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (tx_pkts->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_pkts->outer_l2_len + tx_pkts->outer_l3_len : 0;
 	if (tot_len + m_last->data_len < tx_pkts->tso_segsz + hdr_len)
 		return true;
@@ -3602,15 +3602,15 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv4_hdr *ipv4_hdr;
 	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		ipv4_hdr->hdr_checksum = 0;
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is supported, HW can calculate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3629,13 +3629,13 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv6_hdr *ipv6_hdr;
 	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is supported, HW can calculate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3654,10 +3654,10 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	uint32_t paylen, hdr_len, l4_proto;
 	struct rte_udp_hdr *udp_hdr;
 
-	if (!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 		return;
 
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
 		if (hns3_outer_ipv4_cksum_prepared(m, ol_flags, &l4_proto))
 			return;
 	} else {
@@ -3666,7 +3666,7 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	}
 
 	/* driver should ensure the outer udp cksum is 0 for TUNNEL TSO */
-	if (l4_proto == IPPROTO_UDP && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (l4_proto == IPPROTO_UDP && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		hdr_len = m->l2_len + m->l3_len + m->l4_len;
 		hdr_len += m->outer_l2_len + m->outer_l3_len;
 		paylen = m->pkt_len - hdr_len;
@@ -3692,7 +3692,7 @@ hns3_check_tso_pkt_valid(struct rte_mbuf *m)
 		return -EINVAL;
 
 	hdr_len = m->l2_len + m->l3_len + m->l4_len;
-	hdr_len += (m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			m->outer_l2_len + m->outer_l3_len : 0;
 	if (hdr_len > HNS3_MAX_TSO_HDR_SIZE)
 		return -EINVAL;
@@ -3742,12 +3742,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ)
+	if (m->ol_flags & RTE_MBUF_F_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN)
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
@@ -3767,7 +3767,7 @@ hns3_udp_cksum_help(struct rte_mbuf *m)
 	uint16_t cksum = 0;
 	uint32_t l4_len;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 				struct rte_ipv4_hdr *, m->l2_len);
 		l4_len = rte_be_to_cpu_16(ipv4_hdr->total_length) - m->l3_len;
@@ -3798,8 +3798,8 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	uint16_t dst_port;
 
 	if (tx_queue->udp_cksum_mode == HNS3_SPECIAL_PORT_HW_CKSUM_MODE ||
-	    ol_flags & PKT_TX_TUNNEL_MASK ||
-	    (ol_flags & PKT_TX_L4_MASK) != PKT_TX_UDP_CKSUM)
+	    ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK ||
+	    (ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_UDP_CKSUM)
 		return true;
 	/*
 	 * A UDP packet with the same dst_port as VXLAN\VXLAN_GPE\GENEVE will
@@ -3816,7 +3816,7 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	case RTE_VXLAN_GPE_DEFAULT_PORT:
 	case RTE_GENEVE_DEFAULT_PORT:
 		udp_hdr->dgram_cksum = hns3_udp_cksum_help(m);
-		m->ol_flags = ol_flags & ~PKT_TX_L4_MASK;
+		m->ol_flags = ol_flags & ~RTE_MBUF_F_TX_L4_MASK;
 		return false;
 	default:
 		return true;
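
Taken together, the hns3 TX paths above expect the application to have prepared a TSO mbuf roughly as follows before rte_eth_tx_prepare()/rte_eth_tx_burst() (a minimal IPv4/TCP example; the MSS value is illustrative):

	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);	/* no TCP options assumed */
	m->tso_segsz = 1448;
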
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0..fd2ffc30d9 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -471,7 +471,7 @@ struct hns3_tx_queue {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, PMD needs to
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
@@ -545,12 +545,11 @@ struct hns3_queue_info {
 	unsigned int socket_id;
 };
 
-#define HNS3_TX_CKSUM_OFFLOAD_MASK ( \
-	PKT_TX_OUTER_UDP_CKSUM | \
-	PKT_TX_OUTER_IP_CKSUM | \
-	PKT_TX_IP_CKSUM | \
-	PKT_TX_TCP_SEG | \
-	PKT_TX_L4_MASK)
+#define HNS3_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+	RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+	RTE_MBUF_F_TX_IP_CKSUM | \
+	RTE_MBUF_F_TX_TCP_SEG | \
+	RTE_MBUF_F_TX_L4_MASK)
 
 enum hns3_cksum_status {
 	HNS3_CKSUM_NONE = 0,
@@ -574,29 +573,29 @@ hns3_rx_set_cksum_flag(struct hns3_rx_queue *rxq,
 				 BIT(HNS3_RXD_OL4E_B))
 
 	if (likely((l234_info & HNS3_RXD_CKSUM_ERR_MASK) == 0)) {
-		rxm->ol_flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		rxm->ol_flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L3E_B))) {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		rxq->dfx_stats.l3_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L4E_B))) {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		rxq->dfx_stats.l4_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL3E_B)))
 		rxq->dfx_stats.ol3_csum_errors++;
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL4E_B))) {
-		rxm->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		rxq->dfx_stats.ol4_csum_errors++;
 	}
 }
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 74c848d5ef..0edd4756f1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -105,7 +105,7 @@ hns3_desc_parse_field(struct hns3_rx_queue *rxq,
 		pkt = sw_ring[i].mbuf;
 
 		/* init rte_mbuf.rearm_data last 64-bit */
-		pkt->ol_flags = PKT_RX_RSS_HASH;
+		pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		l234_info = rxdp[i].rx.l234_info;
 		ol_info = rxdp[i].rx.ol_info;
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index d5c49333b2..be1fdbcdf0 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -43,7 +43,7 @@ hns3_desc_parse_field_sve(struct hns3_rx_queue *rxq,
 
 	for (i = 0; i < (int)bd_vld_num; i++) {
 		/* init rte_mbuf.rearm_data last 64-bit */
-		rx_pkts[i]->ol_flags = PKT_RX_RSS_HASH;
+		rx_pkts[i]->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		ret = hns3_handle_bdinfo(rxq, rx_pkts[i], key->bd_base_info[i],
 					 key->l234_info[i]);
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 33a6a9e840..4cf4fa8a45 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -44,42 +44,39 @@
 #define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define I40E_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define I40E_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define I40E_TX_IEEE1588_TMST 0
 #endif
 
-#define I40E_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
-
-#define I40E_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_OUTER_IPV6 |	\
-		PKT_TX_IPV4 |		\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_OUTER_IP_CKSUM | \
-		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ |       \
-		PKT_TX_VLAN |	\
-		PKT_TX_TUNNEL_MASK |	\
+#define I40E_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+#define I40E_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_OUTER_IPV6 |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+		RTE_MBUF_F_TX_TCP_SEG |        \
+		RTE_MBUF_F_TX_QINQ |       \
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
 #define I40E_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
 
-#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK ( \
-		PKT_TX_IPV4 | \
-		PKT_TX_IPV6 | \
-		PKT_TX_OUTER_IPV4 | \
-		PKT_TX_OUTER_IPV6)
+#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK (RTE_MBUF_F_TX_IPV4 | \
+		RTE_MBUF_F_TX_IPV6 | \
+		RTE_MBUF_F_TX_OUTER_IPV4 | \
+		RTE_MBUF_F_TX_OUTER_IPV6)
 
 #define I40E_TX_OFFLOAD_SIMPLE_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
 
 static int
 i40e_monitor_callback(const uint64_t value,
@@ -119,7 +116,7 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -130,8 +127,8 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 #ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
 		(1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-			PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+			RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -154,11 +151,11 @@ i40e_rxd_status_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
-			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ?
-							PKT_RX_FDIR : 0);
+							RTE_MBUF_F_RX_FDIR : 0);
 
 	return flags;
 }
@@ -171,22 +168,22 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
 
 #define I40E_RX_ERR_BITS 0x3f
 	if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -205,9 +202,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
 
 	if ((mb->packet_type & RTE_PTYPE_L2_MASK)
 			== RTE_PTYPE_L2_ETHER_TIMESYNC)
-		pkt_flags = PKT_RX_IEEE1588_PTP;
+		pkt_flags = RTE_MBUF_F_RX_IEEE1588_PTP;
 	if (tsyn & 0x04) {
-		pkt_flags |= PKT_RX_IEEE1588_TMST;
+		pkt_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
 		mb->timesync = tsyn & 0x03;
 	}
 
@@ -233,21 +230,21 @@ i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	} else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 	if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) {
 		mb->hash.fdir.lo =
 			rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -258,11 +255,11 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -270,15 +267,15 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 		I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= I40E_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= I40E_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -306,7 +303,7 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			union i40e_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 				<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 	else
@@ -314,21 +311,21 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2)
 			<< I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -336,18 +333,18 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -526,10 +523,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 				ptype_tbl[(uint8_t)((qword1 &
 				I40E_RXD_QW1_PTYPE_MASK) >>
 				I40E_RXD_QW1_PTYPE_SHIFT)];
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp[j].wb.qword0.hi_dword.rss);
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -789,10 +786,10 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -957,10 +954,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1004,13 +1001,13 @@ i40e_recv_scattered_pkts(void *rx_queue,
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
-	mask |= PKT_TX_IEEE1588_TMST;
+	mask |= RTE_MBUF_F_TX_IEEE1588_TMST;
 #endif
 
 	return (flags & mask) ? 1 : 0;
@@ -1029,7 +1026,7 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = I40E_TX_CTX_DESC_TSO;
@@ -1122,7 +1119,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(i40e_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -1151,7 +1148,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1161,7 +1158,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			i40e_parse_tunneling_params(ol_flags, tx_offload,
 						    &cd_tunneling_params);
 		/* Enable checksum offloading */
@@ -1186,12 +1183,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled means no timestamp */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					i40e_set_tso_ctx(tx_pkt, tx_offload);
 			else {
 #ifdef RTE_LIBRTE_IEEE1588
-				if (ol_flags & PKT_TX_IEEE1588_TMST)
+				if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 					cd_type_cmd_tso_mss |=
 						((uint64_t)I40E_TX_CTX_DESC_TSYN <<
 						 I40E_TXD_CTX_QW1_CMD_SHIFT);
@@ -1200,7 +1197,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
@@ -1239,7 +1236,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
 				txd->buffer_addr =
 					rte_cpu_to_le_64(buf_dma_addr);
@@ -1580,7 +1577,7 @@ i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check for m->nb_segs to not exceed the limits. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > I40E_TX_MAX_MTU_SEG ||
 			    m->pkt_len > I40E_FRAME_SIZE_MAX) {
 				rte_errno = EINVAL;
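
Note: the scalar RX hunks above are a pure rename; the flag-derivation logic is unchanged. A minimal self-contained sketch of that pattern, with illustrative stand-in values for the flags and descriptor error bits (the real ones live in rte_mbuf_core.h and the i40e descriptor layout):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the renamed flags; the real values come from
 * rte_mbuf_core.h and the real error-bit layout from the i40e datasheet. */
#define F_RX_IP_CKSUM_GOOD (1ULL << 7)
#define F_RX_IP_CKSUM_BAD  (1ULL << 4)
#define F_RX_L4_CKSUM_GOOD (1ULL << 8)
#define F_RX_L4_CKSUM_BAD  (1ULL << 3)

#define ERR_IPE (1u << 0) /* hypothetical IP checksum error bit */
#define ERR_L4E (1u << 1) /* hypothetical L4 checksum error bit */

static uint64_t err_bits_to_flags(uint32_t err)
{
	uint64_t flags = 0;

	if (err == 0) /* fast path: both checksums validated by hardware */
		return F_RX_IP_CKSUM_GOOD | F_RX_L4_CKSUM_GOOD;
	flags |= (err & ERR_IPE) ? F_RX_IP_CKSUM_BAD : F_RX_IP_CKSUM_GOOD;
	flags |= (err & ERR_L4E) ? F_RX_L4_CKSUM_BAD : F_RX_L4_CKSUM_GOOD;
	return flags;
}

int main(void)
{
	printf("flags=%#llx\n", (unsigned long long)err_bits_to_flags(ERR_L4E));
	return 0;
}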
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b99323992f..d0bf86dfba 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -117,26 +117,26 @@ desc_to_olflags_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts)
 	/* map rss and vlan type to rss hash and vlan flag */
 	const vector unsigned char vlan_flags = (vector unsigned char){
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char rss_flags = (vector unsigned char){
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char l3_l4e_flags = (vector unsigned char){
 			0,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
-					     | PKT_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
+					     | RTE_MBUF_F_RX_IP_CKSUM_BAD,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = (vector unsigned int)vec_mergel(descs[0], descs[1]);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3b9eef91a9..ca10e0dd15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -78,7 +78,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -208,8 +208,8 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32(
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/*
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -217,11 +217,11 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8(
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/*
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -229,37 +229,37 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -442,7 +442,7 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask = _mm256_cmpeq_epi32(fdir, fdir_id);
 			__m256i fdir_bits = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
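
The vector paths keep an old trick under the new names: _mm256_shuffle_epi8 moves one byte per lane, but combinations that include RTE_MBUF_F_RX_L4_CKSUM_GOOD need more than 8 bits, so the tables store each value shifted right by 1 and the RX loop shifts the looked-up byte back before masking with cksum_mask. A scalar rendering of the same idea, with assumed flag values:

#include <stdint.h>
#include <stdio.h>

/* Assumed flag values, for illustration only. Note F_L4_GOOD needs bit 8,
 * so the raw value cannot fit in a one-byte shuffle table entry. */
#define F_IP_GOOD (1u << 7)
#define F_IP_BAD  (1u << 4)
#define F_L4_GOOD (1u << 8)
#define F_L4_BAD  (1u << 3)

int main(void)
{
	/* Entries pre-shifted right by 1 so each fits in 8 bits, exactly
	 * like the "shift right 1 bit" tables in the hunks above. */
	const uint8_t tbl[4] = {
		(F_IP_GOOD | F_L4_GOOD) >> 1,
		(F_IP_GOOD | F_L4_BAD)  >> 1,
		(F_IP_BAD  | F_L4_GOOD) >> 1,
		(F_IP_BAD  | F_L4_BAD)  >> 1,
	};
	unsigned int desc_err = 2; /* 2-bit index taken from the descriptor */
	uint32_t ol_flags = (uint32_t)tbl[desc_err] << 1; /* undo the pre-shift */

	printf("ol_flags=%#x\n", ol_flags);
	return 0;
}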
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d64223..2c779fa2a6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -204,7 +204,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -319,8 +319,8 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32
-		(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-		0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+		0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 
 	/* data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -328,11 +328,11 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0);
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/* data to be shuffled by the result of the flags mask shifted by 22
 	 * bits.  This gives use the l3_l4 flags.
@@ -340,33 +340,33 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
 		/* shift right 1 bit to make sure it not exceed 255 */
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 		/* second 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-		PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-		PKT_RX_OUTER_IP_CKSUM_BAD);
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	uint16_t i, received;
 
@@ -571,7 +571,7 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask =
 				_mm256_cmpeq_epi32(fdir, fdir_id);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b2683fda60..b9d9dec769 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -93,43 +93,43 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804};
 
 	const uint32x4_t cksum_mask = {
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD};
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD};
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const uint8x16_t vlan_flags = {
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t l3_l4e_flags = {
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = vzipq_u32(vreinterpretq_u32_u64(descs[0]),
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index b235502db5..497b2404c6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -143,7 +143,7 @@ descs_to_fdir_32b(volatile union i40e_rx_desc *rxdp, struct rte_mbuf **rx_pkt)
 	 * correct location in the mbuf->olflags
 	 */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	v_fd_id_mask = _mm_srli_epi32(v_fd_id_mask, 31);
 	v_fd_id_mask = _mm_slli_epi32(v_fd_id_mask, FDIR_ID_BIT_SHIFT);
 
@@ -203,9 +203,9 @@ descs_to_fdir_16b(__m128i fltstat, __m128i descs[4], struct rte_mbuf **rx_pkt)
 	__m128i v_desc0_mask = _mm_and_si128(v_desc_fdir_mask, v_desc0_shift);
 	descs[0] = _mm_blendv_epi8(descs[0], _mm_setzero_si128(), v_desc0_mask);
 
-	/* Shift to 1 or 0 bit per u32 lane, then to PKT_RX_FDIR_ID offset */
+	/* Shift to 1 or 0 bit per u32 lane, then to RTE_MBUF_F_RX_FDIR_ID offset */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	__m128i v_mask_one_bit = _mm_srli_epi32(v_fdir_id_mask, 31);
 	return _mm_slli_epi32(v_mask_one_bit, FDIR_ID_BIT_SHIFT);
 }
@@ -228,44 +228,44 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	/* Unpack "status" from quadword 1, bits 0:32 */
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
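
The RTE_BUILD_BUG_ON above pins RTE_MBUF_F_RX_FDIR_ID to bit 13, which is what allows the SSE path to turn an all-ones compare mask into the flag with a plain shift pair. A scalar equivalent for reference:

#include <stdint.h>
#include <stdio.h>

#define F_RX_FDIR_ID (1u << 13) /* pinned by the RTE_BUILD_BUG_ON above */

/* Scalar equivalent of the srli(31)/slli(13) pair: reduce an all-ones
 * compare mask to exactly the FDIR_ID flag bit, with no branch. */
static uint32_t fdir_mask_to_flag(uint32_t cmp_mask)
{
	return (cmp_mask >> 31) << 13;
}

int main(void)
{
	printf("%d\n", fdir_mask_to_flag(0xFFFFFFFFu) == F_RX_FDIR_ID); /* 1 */
	printf("%d\n", fdir_mask_to_flag(0x0u) == 0);                   /* 1 */
	return 0;
}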
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index cba1ba8052..1cc1e96b1f 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -379,14 +379,14 @@ iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -403,13 +403,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -445,13 +445,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1044,7 +1044,7 @@ iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 	} else {
@@ -1072,7 +1072,7 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
 #endif
 
 	if (vlan_tci) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = vlan_tci;
 	}
 }
@@ -1089,26 +1089,26 @@ iavf_rxd_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					IAVF_RX_DESC_FLTSTAT_RSS_HASH) ==
-			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << IAVF_RX_DESC_STATUS_FLM_SHIFT) ?
-				PKT_RX_FDIR : 0);
+				RTE_MBUF_F_RX_FDIR : 0);
 
 	if (likely((error_bits & IAVF_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* TODO: Oversize error bit is not processed here */
 
@@ -1129,12 +1129,12 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == IAVF_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -1158,22 +1158,22 @@ iavf_flex_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -1292,11 +1292,11 @@ iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, rxm);
 
 		rxm->ol_flags |= pkt_flags;
@@ -1693,11 +1693,11 @@ iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, first_seg);
 
 		first_seg->ol_flags |= pkt_flags;
@@ -1862,11 +1862,11 @@ iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq)
 				IAVF_RXD_QW1_PTYPE_MASK) >>
 				IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(
 					rxdp[j].wb.qword0.hi_dword.rss);
 
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= iavf_rxd_build_fdir(&rxdp[j], mb);
 
 			mb->ol_flags |= pkt_flags;
@@ -2072,9 +2072,9 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
 static inline uint16_t
 iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
-	if (flags & PKT_TX_TCP_SEG)
+	if (flags & RTE_MBUF_F_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN &&
+	if (flags & RTE_MBUF_F_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2091,21 +2091,21 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 		      IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2113,18 +2113,18 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN &&
+		if (ol_flags & RTE_MBUF_F_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2297,12 +2297,12 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN &&
-			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
+			if (ol_flags & RTE_MBUF_F_TX_VLAN &&
+			    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
 				cd_l2tag2 = tx_pkt->vlan_tci;
@@ -2415,7 +2415,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > IAVF_TX_MAX_MTU_SEG) {
 				rte_errno = EINVAL;
 				return i;
@@ -2446,7 +2446,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS &&
-		    ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+		    ol_flags & (RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN)) {
 			ret = iavf_check_vlan_up2tc(txq, m);
 			if (ret != 0) {
 				rte_errno = -ret;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 6c3fdbb3b2..2ff18505f4 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -52,23 +52,21 @@
 #define IAVF_TSO_MAX_SEG          UINT8_MAX
 #define IAVF_TX_MAX_MTU_SEG       8
 
-#define IAVF_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
-
-#define IAVF_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
+#define IAVF_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IAVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
 
 /**
  * Rx Flex Descriptors
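
IAVF_TX_OFFLOAD_NOTSUP_MASK is built by XOR-ing the driver's supported set against the full RTE_MBUF_F_TX_OFFLOAD_MASK, so every unsupported bit becomes a reject bit that iavf_prep_pkts can test with a single AND. A toy-scale sketch of the same construction:

#include <stdint.h>
#include <stdio.h>
#include <errno.h>

/* Toy three-bit flag universe to show the XOR-complement construction;
 * the real masks span the full 64-bit ol_flags space. */
#define ALL_TX_FLAGS    0x7u
#define SUPPORTED_FLAGS 0x5u
#define NOTSUP_MASK     (ALL_TX_FLAGS ^ SUPPORTED_FLAGS)

static int tx_prep_check(uint32_t ol_flags)
{
	if (ol_flags & NOTSUP_MASK)
		return -ENOTSUP; /* the driver sets rte_errno and stops here */
	return 0;
}

int main(void)
{
	printf("%d\n", tx_prep_check(0x4u)); /* supported -> 0 */
	printf("%d\n", tx_prep_check(0x2u)); /* unsupported bit -> -ENOTSUP */
	return 0;
}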
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036..b1d70036e5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -127,8 +127,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf =
-		_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-				 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+				 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -136,11 +136,11 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i rss_flags_shuf =
 		_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 				0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0);
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/**
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -148,33 +148,33 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -502,10 +502,10 @@ static inline __m256i
 flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -626,36 +626,36 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -664,27 +664,27 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m256i rss_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	const __m256i vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0);
 
 	uint16_t i, received;
@@ -1025,8 +1025,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 							0, 0, 0, 0,
 							0, 0, 0, 0,
 							0, 0,
-							PKT_RX_VLAN |
-							PKT_RX_VLAN_STRIPPED,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
 							0);
 
 				vlan_flags =
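
flex_rxd_to_fdir_flags_vec_avx2 above treats a flow_id of FDID_MIS_MAGIC (0xFFFFFFFF) as "no FDIR match" and relies on the pinned bit positions to emit both FDIR flags at once; the scalar iavf_rxd_to_pkt_fields_* helpers earlier in this patch make the same check. A scalar rendering:

#include <stdint.h>
#include <stdio.h>

#define FDID_MIS_MAGIC 0xFFFFFFFFu /* sentinel used in the hunks above */
#define F_RX_FDIR      (1u << 2)   /* both bit positions are pinned by */
#define F_RX_FDIR_ID   (1u << 13)  /* the RTE_BUILD_BUG_ON checks above */

/* Scalar rendering of flex_rxd_to_fdir_flags_vec_*: report FDIR flags
 * only when the descriptor carries a real flow id. */
static uint32_t fdir_id_to_flags(uint32_t flow_id)
{
	return (flow_id != FDID_MIS_MAGIC) ? (F_RX_FDIR | F_RX_FDIR_ID) : 0;
}

int main(void)
{
	printf("%#x\n", fdir_id_to_flags(42u));            /* 0x2004 */
	printf("%#x\n", fdir_id_to_flags(FDID_MIS_MAGIC)); /* 0 */
	return 0;
}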
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cd..6d19a2ecdc 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -431,8 +431,8 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 * destination
 			 */
 			const __m256i vlan_flags_shuf =
-				_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-						 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+				_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+						 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 #endif
 
 #ifdef IAVF_RX_RSS_OFFLOAD
@@ -443,11 +443,11 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i rss_flags_shuf =
 				_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 						0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0);
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 #endif
 
 #ifdef IAVF_RX_CSUM_OFFLOAD
@@ -457,33 +457,33 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 
 #if defined(IAVF_RX_CSUM_OFFLOAD) || defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
@@ -688,10 +688,10 @@ static __rte_always_inline __m256i
 flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -974,36 +974,36 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 #if defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
 			/**
@@ -1015,28 +1015,28 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0);
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0);
 
 			const __m256i vlan_flags_shuf = _mm256_set_epi8
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0);
 #endif
 
@@ -1273,8 +1273,8 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 							 0, 0, 0, 0,
 							 0, 0, 0, 0,
 							 0, 0,
-							 PKT_RX_VLAN |
-							 PKT_RX_VLAN_STRIPPED,
+							 RTE_MBUF_F_RX_VLAN |
+							 RTE_MBUF_F_RX_VLAN_STRIPPED,
 							 0);
 
 					vlan_flags =
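
The FDIR derivation in flex_rxd_to_fdir_flags_vec_avx512() above reduces
to one rule per descriptor; a minimal scalar equivalent for reference:

#include <stdint.h>
#include <rte_mbuf_core.h>

/* Scalar equivalent of the vector FDIR-flag derivation: a flow_id of
 * 0xFFFFFFFF (FDID_MIS_MAGIC) means "no flow director match". */
static inline uint64_t
fdir_id_to_ol_flags(uint32_t flow_id)
{
	if (flow_id == UINT32_MAX) /* miss */
		return 0;
	return RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
}
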
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 457d6339e1..1fd37b74c1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -326,33 +326,33 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 		     IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -365,7 +365,7 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 #endif
 
 #ifdef IAVF_TX_VLAN_QINQ_OFFLOAD
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 			    IAVF_TXD_QW1_L2TAG1_SHIFT);
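
On the consuming side, iavf_txd_enable_offload() relies on the application
having set the flags and header lengths before tx_burst; a sketch of that
caller-side setup for a plain IPv4/TCP packet (standard mbuf API usage,
assuming the port has the matching Tx offloads enabled):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>

/* Sketch: request IPv4 header + TCP checksum offload on mbuf `m`.
 * l2_len/l3_len must match the actual packet layout so the driver can
 * derive the MACLEN/IPLEN descriptor fields as done above. */
static void
request_ipv4_tcp_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM;
}
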
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..363d0e62df 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -108,42 +108,42 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
 	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
@@ -193,10 +193,10 @@ static inline __m128i
 flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -225,43 +225,43 @@ flex_desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 	const __m128i desc_mask = _mm_set_epi32(0x3070, 0x3070,
 						0x3070, 0x3070);
 
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
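
Note that the RTE_BUILD_BUG_ON() guards carried through this rename are
load-bearing: the shuffle constants bake numeric flag positions into
vector lanes, so the build must break if a flag definition ever moves.
The same invariant expressed stand-alone with C11 _Static_assert (values
as currently defined):

#include <rte_mbuf_core.h>

/* The vector paths assume these exact bit positions; if a later release
 * moved a flag, compilation fails here instead of corrupting ol_flags. */
_Static_assert(RTE_MBUF_F_RX_FDIR == (1 << 2),
	       "FDIR flag expected at bit 2");
_Static_assert(RTE_MBUF_F_RX_FDIR_ID == (1 << 13),
	       "FDIR_ID flag expected at bit 13");
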
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047..0be124e170 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -10,11 +10,10 @@
 #include "ice_rxtx.h"
 #include "ice_rxtx_vec_common.h"
 
-#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
+#define ICE_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 
 /* Offset of mbuf dynamic field for protocol extraction data */
 int rte_net_ice_dynfield_proto_xtr_metadata_offs = -1;
@@ -88,13 +87,13 @@ ice_rxd_to_pkt_fields_by_comms_generic(__rte_unused struct ice_rx_queue *rxq,
 	uint16_t stat_err = rte_le_to_cpu_16(desc->status_error0);
 
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 #endif
@@ -112,14 +111,14 @@ ice_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct ice_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -136,13 +135,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v1(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -178,13 +177,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1473,27 +1472,27 @@ ice_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & ICE_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)))
-		flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 	return flags;
 }
@@ -1503,7 +1502,7 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 {
 	if (rte_le_to_cpu_16(rxdp->wb.status_error0) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -1515,8 +1514,8 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+				RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -2319,11 +2318,11 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -2331,16 +2330,16 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 		ICE_TXD_CTX_QW0_EIPLEN_S;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GTP:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GTP:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= ICE_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= ICE_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -2377,7 +2376,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			union ice_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 	else
@@ -2385,21 +2384,21 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2407,18 +2406,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2496,10 +2495,10 @@ ice_build_ctob(uint32_t td_cmd,
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 	return (flags & mask) ? 1 : 0;
 }
@@ -2517,7 +2516,7 @@ ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = ICE_TX_CTX_DESC_TSO;
@@ -2604,7 +2603,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -2633,14 +2632,14 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			ice_parse_tunneling_params(ol_flags, tx_offload,
 						   &cd_tunneling_params);
 
@@ -2664,7 +2663,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				txe->mbuf = NULL;
 			}
 
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					ice_set_tso_ctx(tx_pkt, tx_offload);
 
@@ -2672,7 +2671,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				rte_cpu_to_le_32(cd_tunneling_params);
 
 			/* TX context descriptor based double VLAN insert */
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
@@ -2700,7 +2699,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > ICE_MAX_DATA_PER_TXD)) {
 				txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
 				txd->cmd_type_offset_bsz =
@@ -3287,7 +3286,7 @@ ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_TCP_SEG &&
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG &&
 		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
 		     m->tso_segsz > ICE_MAX_TSO_MSS ||
 		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
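
The tso_segsz window checked in ice_prep_pkts() above is filled in by the
application; a caller-side sketch of a TCP/IPv4 TSO request (the
1448-byte MSS is only an example value):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>

/* Sketch: mark mbuf `m` for TSO. The header lengths must cover the real
 * headers, and tso_segsz (the per-segment payload, i.e. MSS) must fall
 * inside the ICE_MIN_TSO_MSS..ICE_MAX_TSO_MSS window checked above. */
static void
request_tcp_tso(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1448;
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_SEG;
}
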
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac0180..c20927dc5c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -20,10 +20,10 @@ static __rte_always_inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -142,82 +142,82 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -226,16 +226,16 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
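
The ">> 20" in the entries above moves the outer-L4 status flags, which
sit at bits 21 and 22 in this series, down next to the other flags before
the common ">> 1", so everything fits the 8-bit shuffle lane; the vector
code shifts them back up after the shuffle. The arithmetic as
compile-time checks (assuming the current flag values):

#include <rte_mbuf_core.h>

/* After ">> 20" then the common ">> 1", the outer-L4 flags occupy lane
 * bits 0 and 1, clear of the inner checksum flags: */
_Static_assert(((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20) >> 1) == 0x1,
	       "outer L4 bad expected at bit 21");
_Static_assert(((RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20) >> 1) == 0x2,
	       "outer L4 good expected at bit 22");
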
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d2..1fe3de5aa2 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -135,10 +135,10 @@ static inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -242,82 +242,82 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -326,16 +326,16 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* 2nd 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	uint16_t i, received;
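
The AVX-512 flex path repeats the same two-table scheme; stripped of
intrinsics, each lane's flags are assembled roughly as below (names are
illustrative and the outer-L4 repositioning shown above is omitted for
brevity):

#include <stdint.h>

/* Scalar shape of the flag assembly: look up checksum flags (stored
 * pre-shifted right by 1), undo the shift, mask to checksum bits only,
 * then OR in the RSS/VLAN flags from the second table. */
static uint64_t
assemble_ol_flags(const uint8_t cksum_tbl[16], unsigned int cksum_idx,
		  const uint8_t rss_vlan_tbl[16], unsigned int rss_vlan_idx,
		  uint32_t cksum_mask)
{
	uint64_t fl = ((uint64_t)cksum_tbl[cksum_idx & 0xf] << 1) & cksum_mask;

	return fl | rss_vlan_tbl[rss_vlan_idx & 0xf];
}
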
 
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a..6de054f237 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -565,33 +565,33 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 			ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offload */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
@@ -603,7 +603,7 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
 
 	/* Tx VLAN/QINQ insertion Offload */
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 				ICE_TXD_QW1_L2TAG1_S);
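
The IL2TAG1 branch above pairs with two mbuf fields on the caller side; a
minimal sketch of a single-tag insertion request:

#include <rte_mbuf.h>

/* Sketch: ask the driver to insert one VLAN tag from the descriptor,
 * as handled by the IL2TAG1 branch above. */
static void
request_vlan_insert(struct rte_mbuf *m, uint16_t tci)
{
	m->vlan_tci = tci; /* PCP/DEI/VID, CPU byte order */
	m->ol_flags |= RTE_MBUF_F_TX_VLAN;
}
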
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b41..df1347e64d 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -14,10 +14,10 @@ static inline __m128i
 ice_flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -116,72 +116,72 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	 */
 	const __m128i desc_mask = _mm_set_epi32(0x30f0, 0x30f0,
 						0x30f0, 0x30f0);
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags =
-		_mm_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
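
One detail that makes these constants easy to misread: _mm_set_epi8()
lists bytes from lane 15 down to lane 0, so the last argument of each
table above is the entry picked by shuffle index 0. A tiny self-check
(illustrative):

#include <emmintrin.h>
#include <stdint.h>
#include <string.h>

/* Returns 1: lane 0 of the vector holds the *last* _mm_set_epi8() arg. */
static int
set_epi8_lane_order(void)
{
	__m128i v = _mm_set_epi8(15, 14, 13, 12, 11, 10, 9, 8,
				 7, 6, 5, 4, 3, 2, 1, 0);
	uint8_t lanes[16];

	memcpy(lanes, &v, sizeof(lanes));
	return lanes[0] == 0 && lanes[15] == 15;
}
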
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 9848afd9ca..fdb388568b 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -74,17 +74,16 @@
 #define IGC_TSO_MAX_MSS			9216
 
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGC_TX_OFFLOAD_MASK (		\
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN |	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_L4_MASK |	\
-		PKT_TX_TCP_SEG |	\
-		PKT_TX_UDP_SEG)
-
-#define IGC_TX_OFFLOAD_SEG	(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)
+#define IGC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |	\
+		RTE_MBUF_F_TX_TCP_SEG |	\
+		RTE_MBUF_F_TX_UDP_SEG)
+
+#define IGC_TX_OFFLOAD_SEG	(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)
 
 #define IGC_ADVTXD_POPTS_TXSM	0x00000200 /* L4 Checksum offload request */
 #define IGC_ADVTXD_POPTS_IXSM	0x00000100 /* IP Checksum offload request */
@@ -92,7 +91,7 @@
 /* L4 Packet TYPE of Reserved */
 #define IGC_ADVTXD_TUCMD_L4T_RSV	0x00001800
 
-#define IGC_TX_OFFLOAD_NOTSUP_MASK (PKT_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
+#define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -215,16 +214,18 @@ struct igc_tx_queue {
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
-	static uint64_t l4_chksum_flags[] = {0, 0, PKT_RX_L4_CKSUM_GOOD,
-			PKT_RX_L4_CKSUM_BAD};
+	static uint64_t l4_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD};
 
-	static uint64_t l3_chksum_flags[] = {0, 0, PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_BAD};
+	static uint64_t l3_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD};
 	uint64_t pkt_flags = 0;
 	uint32_t tmp;
 
 	if (statuserr & IGC_RXD_STAT_VP)
-		pkt_flags |= PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 
 	tmp = !!(statuserr & (IGC_RXD_STAT_L4CS | IGC_RXD_STAT_UDPCS));
 	tmp = (tmp << 1) | (uint32_t)!!(statuserr & IGC_RXD_EXT_ERR_L4E);
@@ -332,10 +333,10 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 	rxm->vlan_tci = rte_le_to_cpu_16(rxd->wb.upper.vlan);
 
 	pkt_flags = (hlen_type_rss & IGC_RXD_RSS_TYPE_MASK) ?
-			PKT_RX_RSS_HASH : 0;
+			RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	if (hlen_type_rss & IGC_RXD_VPKT)
-		pkt_flags |= PKT_RX_VLAN;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN;
 
 	pkt_flags |= rx_desc_statuserr_to_pkt_flags(staterr);
 
@@ -1468,7 +1469,7 @@ check_tso_para(uint64_t ol_req, union igc_tx_offload ol_para)
 	if (ol_para.tso_segsz > IGC_TSO_MAX_MSS || ol_para.l2_len +
 		ol_para.l3_len + ol_para.l4_len > IGC_TSO_MAX_HDRLEN) {
 		ol_req &= ~IGC_TX_OFFLOAD_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -1530,20 +1531,20 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
 	if (ol_flags & IGC_TX_OFFLOAD_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 		else
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV6 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP;
 		else
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP;
@@ -1554,26 +1555,26 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 		mss_l4len_idx |= (uint32_t)tx_offload.l4_len <<
 				IGC_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_tcp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_udp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_SCTP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_sctp_hdr)
@@ -1604,7 +1605,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
@@ -1616,8 +1617,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, IGC_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	tmp |= l4_olinfo[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return tmp;
 }
@@ -1774,7 +1775,7 @@ igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * Timer 0 should be used to for packet timestamping,
 		 * sample the packet timestamp to reg 0
 		 */
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IGC_ADVTXD_MAC_TSTAMP;
 
 		if (tx_ol_req) {
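
The IGC_TX_OFFLOAD_NOTSUP_MASK definition above shows the standard idiom:
XOR the driver's supported set against RTE_MBUF_F_TX_OFFLOAD_MASK to get
every valid Tx flag the hardware cannot honor, then test it in the
tx_prepare path. A minimal sketch, with MY_TX_OFFLOAD_MASK as a
hypothetical capability set:

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical capability set, for illustration only. */
#define MY_TX_OFFLOAD_MASK \
	(RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | \
	 RTE_MBUF_F_TX_TCP_SEG)

/* Every valid Tx offload bit the driver does NOT support. */
#define MY_TX_OFFLOAD_NOTSUP_MASK \
	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ MY_TX_OFFLOAD_MASK)

/* Returns 0 if the packet requests an offload the HW lacks. */
static int
my_tx_pkt_supported(const struct rte_mbuf *m)
{
	return (m->ol_flags & MY_TX_OFFLOAD_NOTSUP_MASK) == 0;
}
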
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 431435eea0..db6107dfc1 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -258,7 +258,7 @@ ionic_tx_tcp_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IP_CKSUM) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -279,7 +279,7 @@ ionic_tx_tcp_inner_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IPV4) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -356,14 +356,14 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
+	bool has_vlan = !!(txm->ol_flags & RTE_MBUF_F_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-		((ol_flags & PKT_TX_OUTER_IPV4) ||
-		(ol_flags & PKT_TX_OUTER_IPV6));
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+		((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	/* Preload inner-most TCP csum field with IP pseudo hdr
 	 * calculated with IP length set to zero.  HW will later
@@ -478,15 +478,15 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	desc = &desc_base[q->head_idx];
 	info = IONIC_INFO_PTR(q, q->head_idx);
 
-	if ((ol_flags & PKT_TX_IP_CKSUM) &&
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 	    (txq->flags & IONIC_QCQ_F_CSUM_L3)) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L3;
 	}
 
-	if (((ol_flags & PKT_TX_TCP_CKSUM) &&
+	if (((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_TCP)) ||
-	    ((ol_flags & PKT_TX_UDP_CKSUM) &&
+	    ((ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_UDP))) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L4;
@@ -495,11 +495,11 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN);
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-			((ol_flags & PKT_TX_OUTER_IPV4) ||
-			(ol_flags & PKT_TX_OUTER_IPV6));
+	has_vlan = (ol_flags & RTE_MBUF_F_TX_VLAN);
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+			(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+			((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+			 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
 	flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
@@ -556,7 +556,7 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			rte_prefetch0(&q->info[next_q_head_idx]);
 		}
 
-		if (tx_pkts[nb_tx]->ol_flags & PKT_TX_TCP_SEG)
+		if (tx_pkts[nb_tx]->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			err = ionic_tx_tso(txq, tx_pkts[nb_tx]);
 		else
 			err = ionic_tx(txq, tx_pkts[nb_tx]);
@@ -586,16 +586,15 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  *
  **********************************************************************/
 
-#define IONIC_TX_OFFLOAD_MASK (	\
-	PKT_TX_IPV4 |		\
-	PKT_TX_IPV6 |		\
-	PKT_TX_VLAN |		\
-	PKT_TX_IP_CKSUM |	\
-	PKT_TX_TCP_SEG |	\
-	PKT_TX_L4_MASK)
+#define IONIC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV4 |		\
+	RTE_MBUF_F_TX_IPV6 |		\
+	RTE_MBUF_F_TX_VLAN |		\
+	RTE_MBUF_F_TX_IP_CKSUM |	\
+	RTE_MBUF_F_TX_TCP_SEG |	\
+	RTE_MBUF_F_TX_L4_MASK)
 
 #define IONIC_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
 
 uint16_t
 ionic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -842,30 +841,30 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
 	}
 
 	/* RSS */
-	pkt_flags |= PKT_RX_RSS_HASH;
+	pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	rxm->hash.rss = cq_desc->rss_hash;
 
 	/* Vlan Strip */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_VLAN) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxm->vlan_tci = cq_desc->vlan_tci;
 	}
 
 	/* Checksum */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
 		if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_OK)
-			pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_BAD)
-			pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 		if ((cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_TCP_OK) ||
 			(cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_UDP_OK))
-			pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if ((cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_TCP_BAD) ||
 				(cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_UDP_BAD))
-			pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	rxm->ol_flags = pkt_flags;
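
The ionic Rx path above illustrates the three-state checksum convention
the renamed flags keep: no flag means "not checked", while _GOOD and _BAD
are explicit verdicts. A condensed sketch, with HW_* completion bits
invented for the example:

#include <stdint.h>
#include <rte_mbuf.h>

#define HW_CSUM_CALC (1u << 0)	/* hypothetical: HW checked at all */
#define HW_IP_OK     (1u << 1)
#define HW_IP_BAD    (1u << 2)
#define HW_L4_OK     (1u << 3)
#define HW_L4_BAD    (1u << 4)

static uint64_t
hw_csum_to_ol_flags(uint32_t st)
{
	uint64_t fl = 0;

	if (!(st & HW_CSUM_CALC))
		return 0;	/* leave both checksums "unknown" */
	if (st & HW_IP_OK)
		fl |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
	else if (st & HW_IP_BAD)
		fl |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
	if (st & HW_L4_OK)
		fl |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
	else if (st & HW_L4_BAD)
		fl |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
	return fl;
}
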
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..b1e694e2a0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
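
ixgbe caches the VLAN flag pair per queue, as above, so the hot Rx loop
ORs in one precomputed value instead of re-testing the strip offload for
every packet. The same pattern in isolation (queue struct hypothetical):

#include <stdint.h>
#include <rte_mbuf.h>

struct my_rxq {
	uint64_t vlan_flags;	/* applied to each received mbuf */
};

static void
my_rxq_set_vlan_strip(struct my_rxq *rxq, int on)
{
	/* Stripped: the TCI is in mbuf->vlan_tci, not in the data. */
	rxq->vlan_flags = on ?
		(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) :
		RTE_MBUF_F_RX_VLAN;
}
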
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 717ae8f775..795b3c1363 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -54,27 +54,26 @@
 #include "ixgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IXGBE_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IXGBE_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_MACSEC |			 \
-		PKT_TX_OUTER_IP_CKSUM |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
+#define IXGBE_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_MACSEC |			 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
 		IXGBE_TX_IEEE1588_TMST)
 
 #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
 
 #if 1
 #define RTE_PMD_USE_PREFETCH
@@ -384,14 +383,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4 |
 				IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
@@ -407,14 +406,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << IXGBE_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << IXGBE_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
@@ -422,7 +421,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
@@ -430,7 +429,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -445,7 +444,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		}
 	}
 
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
 		tx_offload_mask.l2_len |= ~0;
@@ -455,7 +454,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			       << IXGBE_ADVTXD_TUNNEL_LEN;
 	}
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union ixgbe_crypto_tx_desc_md *md =
 				(union ixgbe_crypto_tx_desc_md *)mdata;
 		seqnum_seed |=
@@ -479,7 +478,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = tx_offload.l3_len;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		vlan_macip_lens |= (tx_offload.outer_l2_len <<
 				    IXGBE_ADVTXD_MACLEN_SHIFT);
 	else
@@ -529,11 +528,11 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_IXSM;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
 	return tmp;
 }
@@ -543,13 +542,13 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		cmdtype |= (1 << IXGBE_ADVTXD_OUTERIPCS_SHIFT);
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= IXGBE_ADVTXD_MAC_LINKSEC;
 	return cmdtype;
 }
@@ -678,7 +677,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -826,14 +825,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IXGBE_ADVTXD_MAC_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
 
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is
 				 * not the packet len but the tcp payload len */
 				pkt_len -= (tx_offload.l2_len +
@@ -1433,14 +1432,14 @@ static inline uint64_t
 ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -1468,7 +1467,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & IXGBE_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1484,10 +1483,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	 * Bit 30: L4I, L4I integrity error
 	 */
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	pkt_flags = error_to_pkt_flags_map[(rx_status >>
 		IXGBE_RXDADV_ERR_CKSUM_BIT) & IXGBE_RXDADV_ERR_CKSUM_MSK];
@@ -1499,18 +1498,18 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	if ((rx_status & IXGBE_RXDADV_ERR_TCPE) &&
 	    (pkt_info & IXGBE_RXDADV_PKTTYPE_UDP) &&
 	    rx_udp_csum_zero_err)
-		pkt_flags &= ~PKT_RX_L4_CKSUM_BAD;
+		pkt_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if ((rx_status & IXGBE_RXD_STAT_OUTERIPCS) &&
 	    (rx_status & IXGBE_RXDADV_ERR_OUTERIPER)) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & IXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1597,10 +1596,10 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
 				ixgbe_rxd_pkt_info_to_pkt_type
 					(pkt_info[j], rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss = rte_le_to_cpu_32(
 				    rxdp[j].wb.lower.hi_dword.rss);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash = rte_le_to_cpu_16(
 				    rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
 				    IXGBE_ATR_HASH_MASK;
@@ -1918,7 +1917,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr, vlan_flags);
@@ -1932,10 +1931,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ixgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 			rxm->hash.rss = rte_le_to_cpu_32(
 						rxd.wb.lower.hi_dword.rss);
-		else if (pkt_flags & PKT_RX_FDIR) {
+		else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash = rte_le_to_cpu_16(
 					rxd.wb.lower.hi_dword.csum_ip.csum) &
 					IXGBE_ATR_HASH_MASK;
@@ -2011,7 +2010,7 @@ ixgbe_fill_cluster_head_buf(
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
@@ -2024,9 +2023,9 @@ ixgbe_fill_cluster_head_buf(
 	head->packet_type =
 		ixgbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 		head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
-	else if (pkt_flags & PKT_RX_FDIR) {
+	else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash =
 			rte_le_to_cpu_16(desc->wb.lower.hi_dword.csum_ip.csum)
 							  & IXGBE_ATR_HASH_MASK;
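
The error_to_pkt_flags_map[] above encodes both checksum verdicts per
entry, so the two descriptor error bits resolve to the final flags with a
single load. A scalar sketch of the same table (bit assignment
illustrative):

#include <stdint.h>
#include <rte_mbuf.h>

/* Index: bit0 = L4 checksum error, bit1 = IP checksum error. */
static uint64_t
csum_err_bits_to_flags(uint32_t err)
{
	static const uint64_t map[4] = {
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
	};
	return map[err & 0x3];
}
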
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index c541f537c7..90b254ea26 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -105,10 +105,10 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0x00, 0x00, 0x00, 0x00};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, 0, 0,
-			0, 0, 0, PKT_RX_FDIR};
+			0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_FDIR};
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const uint8x16_t vlan_csum_msk = {
@@ -123,23 +123,23 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const uint8x16_t vlan_csum_map_lo = {
-			PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0};
 
 	const uint8x16_t vlan_csum_map_hi = {
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0};
 
 	/* change mask from 0x200(IXGBE_RXDADV_PKTTYPE_UDP) to 0x2 */
@@ -153,7 +153,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0, 0, 0, 0};
 
 	const uint8x16_t udp_csum_bad_shuf = {
-			0xFF, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0, 0,
+			0xFF, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
@@ -194,7 +194,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 	vtag_lo = vorrq_u8(ptype, vtag_lo);
 
 	/* convert the UDP header present 0x2 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in the low byte of the 8-bit ol_flag words in
 	 * vtag_lo (4x8). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -337,7 +337,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
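
The RTE_BUILD_BUG_ON above is what keeps the byte-wide tables in this file
honest across the rename: if the flag values ever moved out of the low 8
bits, the build would fail rather than silently truncating ol_flags. The
guard generalizes to any flag a vector path narrows to a byte:

#include <stdint.h>
#include <rte_common.h>
#include <rte_mbuf.h>

static inline void
check_flags_fit_in_byte(void)
{
	/* Compile-time proof the pair survives a cast to uint8_t. */
	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN |
			  RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
}
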
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 1dea95e73b..1eed949495 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -108,9 +108,9 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i ipsec_proc_msk  =
 			_mm_set1_epi32(IXGBE_RXDADV_IPSEC_STATUS_SECP);
 	const __m128i ipsec_err_flag  =
-			_mm_set1_epi32(PKT_RX_SEC_OFFLOAD_FAILED |
-				       PKT_RX_SEC_OFFLOAD);
-	const __m128i ipsec_proc_flag = _mm_set1_epi32(PKT_RX_SEC_OFFLOAD);
+			_mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED |
+				       RTE_MBUF_F_RX_SEC_OFFLOAD);
+	const __m128i ipsec_proc_flag = _mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD);
 
 	rearm = _mm_set_epi32(*rearm3, *rearm2, *rearm1, *rearm0);
 	sterr = _mm_set_epi32(_mm_extract_epi32(descs[3], 2),
@@ -148,10 +148,10 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 			0x00FF, 0x00FF, 0x00FF, 0x00FF);
 
 	/* map rss type to rss hash flag */
-	const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+	const __m128i rss_flags = _mm_set_epi8(RTE_MBUF_F_RX_FDIR, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const __m128i vlan_csum_msk = _mm_set_epi16(
@@ -165,23 +165,23 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const __m128i vlan_csum_map_lo = _mm_set_epi8(
 		0, 0, 0, 0,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
 		0, 0, 0, 0,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD);
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 
 	const __m128i vlan_csum_map_hi = _mm_set_epi8(
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
 
 	/* mask everything except UDP header present if specified */
 	const __m128i udp_hdr_p_msk = _mm_set_epi16
@@ -190,7 +190,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 
 	const __m128i udp_csum_bad_shuf = _mm_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		 0, 0, 0, 0, 0, 0, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0xFF);
+		 0, 0, 0, 0, 0, 0, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0xFF);
 
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
 	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
@@ -228,7 +228,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	vtag1 = _mm_or_si128(ptype0, vtag1);
 
 	/* convert the UDP header present 0x200 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in the low byte of the 16-bit ol_flag words in
 	 * vtag1 (4x16). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -428,7 +428,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index a067b60e47..0fd6659978 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -437,7 +437,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 				if (rh->r_dh.has_hash) {
 					uint64_t *hash_ptr;
 
-					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					hash_ptr = rte_pktmbuf_mtod(nicbuf,
 								    uint64_t *);
 					lio_swap_8B_data(hash_ptr, 1);
@@ -494,7 +494,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 						uint64_t *hash_ptr;
 
 						nicbuf->ol_flags |=
-						    PKT_RX_RSS_HASH;
+						    RTE_MBUF_F_RX_RSS_HASH;
 						hash_ptr = rte_pktmbuf_mtod(
 						    nicbuf, uint64_t *);
 						lio_swap_8B_data(hash_ptr, 1);
@@ -547,10 +547,10 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
 
 		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (droq->refill_count >= droq->refill_threshold) {
@@ -1675,13 +1675,13 @@ lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 		cmdsetup.s.iq_no = iq_no;
 
 		/* check checksum offload flags to form cmd */
-		if (m->ol_flags & PKT_TX_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			cmdsetup.s.ip_csum = 1;
 
-		if (m->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			cmdsetup.s.tnl_csum = 1;
-		else if ((m->ol_flags & PKT_TX_TCP_CKSUM) ||
-				(m->ol_flags & PKT_TX_UDP_CKSUM))
+		else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
+				(m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
 			cmdsetup.s.transport_csum = 1;
 
 		if (m->nb_segs == 1) {
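
One subtlety the liquidio hunk above surfaces: RTE_MBUF_F_TX_TCP_CKSUM and
RTE_MBUF_F_TX_UDP_CKSUM are values of the 2-bit RTE_MBUF_F_TX_L4_MASK
field rather than independent bits, so portable code masks first and then
compares. A sketch of the mask-then-compare idiom (command struct
hypothetical):

#include <stdint.h>
#include <rte_mbuf.h>

struct my_tx_cmd {
	uint8_t ip_csum;
	uint8_t tnl_csum;
	uint8_t transport_csum;
};

static void
ol_flags_to_cmd(uint64_t ol, struct my_tx_cmd *cmd)
{
	uint64_t l4 = ol & RTE_MBUF_F_TX_L4_MASK;

	if (ol & RTE_MBUF_F_TX_IP_CKSUM)
		cmd->ip_csum = 1;
	if (ol & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
		cmd->tnl_csum = 1;
	else if (l4 == RTE_MBUF_F_TX_TCP_CKSUM ||
		 l4 == RTE_MBUF_F_TX_UDP_CKSUM)
		cmd->transport_csum = 1;
}
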
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ecf08f53cf..ed9e41fcde 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -406,7 +406,7 @@ mlx4_tx_burst_tso_get_params(struct rte_mbuf *buf,
 {
 	struct mlx4_sq *sq = &txq->msq;
 	const uint8_t tunneled = txq->priv->hw_csum_l2tun &&
-				 (buf->ol_flags & PKT_TX_TUNNEL_MASK);
+				 (buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	tinfo->tso_header_size = buf->l2_len + buf->l3_len + buf->l4_len;
 	if (tunneled)
@@ -915,7 +915,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			uint16_t flags16[2];
 		} srcrb;
 		uint32_t lkey;
-		bool tso = txq->priv->tso && (buf->ol_flags & PKT_TX_TCP_SEG);
+		bool tso = txq->priv->tso && (buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 
 		/* Clean up old buffer. */
 		if (likely(elt->buf != NULL)) {
@@ -991,15 +991,15 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Enable HW checksum offload if requested */
 		if (txq->csum &&
 		    (buf->ol_flags &
-		     (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
+		     (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM))) {
 			const uint64_t is_tunneled = (buf->ol_flags &
-						      (PKT_TX_TUNNEL_GRE |
-						       PKT_TX_TUNNEL_VXLAN));
+						      (RTE_MBUF_F_TX_TUNNEL_GRE |
+						       RTE_MBUF_F_TX_TUNNEL_VXLAN));
 
 			if (is_tunneled && txq->csum_l2tun) {
 				owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
 						MLX4_WQE_CTRL_IL4_HDR_CSUM;
-				if (buf->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+				if (buf->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 					srcrb.flags |=
 					    RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
 			} else {
@@ -1112,18 +1112,18 @@ rxq_cq_to_ol_flags(uint32_t flags, int csum, int csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_IP_HDR_CSUM_OK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_TCP_UDP_CSUM_OK,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	if ((flags & MLX4_CQE_L2_TUNNEL) && csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_IPOK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_L4_CSUM,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -1274,7 +1274,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			/* Update packet information. */
 			pkt->packet_type =
 				rxq_cq_to_pkt_type(cqe, rxq->l2tun_offload);
-			pkt->ol_flags = PKT_RX_RSS_HASH;
+			pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 			pkt->hash.rss = cqe->immed_rss_invalid;
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
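
mlx4_transpose() above moves a completion-status bit straight onto the
corresponding mbuf flag without branching. The general trick, sketched
here with plain scaling by a power-of-two ratio (both masks must be single
bits; HW_L3_OK is a made-up example bit):

#include <stdint.h>
#include <rte_mbuf.h>

static inline uint64_t
bit_transpose(uint64_t val, uint64_t from, uint64_t to)
{
	/* Isolate the source bit, then scale it onto the target bit. */
	return (from >= to) ? (val & from) / (from / to)
			    : (val & from) * (to / from);
}

/* Usage:
 * fl |= bit_transpose(status, HW_L3_OK,
 *                     RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 */
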
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c914a7120c..ffdd50c93d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -9275,7 +9275,7 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
 {
 	uint64_t ol_flags = m->ol_flags;
 	const struct mlx5_flow_tbl_data_entry *tble;
-	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	const uint64_t mask = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 
 	if (!is_tunnel_offload_active(dev)) {
 		info->flags = 0;
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..3ae62cb8e0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -692,10 +692,10 @@ rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe)
 	ol_flags =
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L3_HDR_VALID,
-			  PKT_RX_IP_CKSUM_GOOD) |
+			  RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L4_HDR_VALID,
-			  PKT_RX_L4_CKSUM_GOOD);
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -731,7 +731,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			rss_hash_res = rte_be_to_cpu_32(mcqe->rx_hash_result);
 		if (rss_hash_res) {
 			pkt->hash.rss = rss_hash_res;
-			pkt->ol_flags |= PKT_RX_RSS_HASH;
+			pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		}
 	}
 	if (rxq->mark) {
@@ -745,9 +745,9 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			mark = ((mcqe->byte_cnt_flow & 0xff) << 8) |
 				(mcqe->flow_tag_high << 16);
 		if (MLX5_FLOW_MARK_IS_VALID(mark)) {
-			pkt->ol_flags |= PKT_RX_FDIR;
+			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
 			if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
-				pkt->ol_flags |= PKT_RX_FDIR_ID;
+				pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 				pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
 			}
 		}
@@ -775,7 +775,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			vlan_strip = mcqe->hdr_type &
 				     RTE_BE16(MLX5_CQE_VLAN_STRIPPED);
 		if (vlan_strip) {
-			pkt->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_be_to_cpu_16(cqe->vlan_info);
 		}
 	}
@@ -863,7 +863,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			}
 			pkt = seg;
 			MLX5_ASSERT(len >= (rxq->crc_present << 2));
-			pkt->ol_flags &= EXT_ATTACHED_MBUF;
+			pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 			rxq_cq_to_mbuf(rxq, pkt, cqe, mcqe);
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
@@ -872,7 +872,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				mlx5_lro_update_hdr
 					(rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
 					 mcqe, rxq, len);
-				pkt->ol_flags |= PKT_RX_LRO;
+				pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 				pkt->tso_segsz = len / cqe->lro_num_seg;
 			}
 		}
@@ -1144,7 +1144,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(rte_pktmbuf_mtod(pkt, uint8_t *),
 					    cqe, mcqe, rxq, len);
-			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
 		PKT_LEN(pkt) = len;
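
For LRO completions, the code above both flags the mbuf and reconstructs
tso_segsz so later stages can re-segment the coalesced super-frame. The
same step in isolation (the segment count source is hypothetical):

#include <stdint.h>
#include <rte_mbuf.h>

static void
mark_lro(struct rte_mbuf *pkt, uint32_t len, uint32_t hw_num_seg)
{
	if (hw_num_seg > 1) {
		pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
		/* Approximate the original MSS from the aggregate. */
		pkt->tso_segsz = len / hw_num_seg;
	}
}
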
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..a3244b8c78 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -488,7 +488,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		shinfo = &buf->shinfos[strd_idx];
 		rte_mbuf_ext_refcnt_set(shinfo, 1);
 		/*
-		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
+		 * RTE_MBUF_F_EXTERNAL will be set in pkt->ol_flags when
 		 * attaching the stride to mbuf and more offload flags
 		 * will be added below by calling rxq_cq_to_mbuf().
 		 * Other fields will be overwritten.
@@ -497,7 +497,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 					  buf_len, shinfo);
 		/* Set mbuf head-room. */
 		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
-		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
+		MLX5_ASSERT(pkt->ol_flags == RTE_MBUF_F_EXTERNAL);
 		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
 			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 		DATA_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..43c18c3d1b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -180,7 +180,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		mbuf_init->nb_segs = 1;
 		mbuf_init->port = rxq->port_id;
 		if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)
-			mbuf_init->ol_flags = EXT_ATTACHED_MBUF;
+			mbuf_init->ol_flags = RTE_MBUF_F_EXTERNAL;
 		/*
 		 * prevent compiler reordering:
 		 * rearm_data covers previous fields.
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35..646d2a31e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -255,10 +255,10 @@ mlx5_set_cksum_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_cksum_table); ++i) {
@@ -293,10 +293,10 @@ mlx5_set_swp_types_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_swp_types_table); ++i) {
 		v = 0;
@@ -306,7 +306,7 @@ mlx5_set_swp_types_table(void)
 			v |= MLX5_ETH_WQE_L4_OUTER_UDP;
 		if (i & (1 << 4))
 			v |= MLX5_ETH_WQE_L3_INNER_IPV6;
-		if ((i & 3) == (PKT_TX_UDP_CKSUM >> 52))
+		if ((i & 3) == (RTE_MBUF_F_TX_UDP_CKSUM >> 52))
 			v |= MLX5_ETH_WQE_L4_INNER_UDP;
 		mlx5_swp_types_table[i] = v;
 	}
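
The bit-layout comments above rely on the Tx checksum flags being
contiguous in ol_flags, so a right shift collapses them into a small table
index (the same construction appears later in txq_ol_cksum_to_cs()). A
sketch of the index derivation; the shift count matches the mlx5 code but
is otherwise an implementation detail of the flag positions:

#include <stdint.h>
#include <rte_mbuf.h>

static uint32_t
cksum_table_index(uint64_t ol_flags, int tunnel)
{
	const uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
			      RTE_MBUF_F_TX_L4_MASK |
			      RTE_MBUF_F_TX_IP_CKSUM |
			      RTE_MBUF_F_TX_OUTER_IP_CKSUM;

	/* RTE_MBUF_F_TX_TCP_SEG lands at bit 0, tunnel at bit 9. */
	return (uint32_t)((ol_flags & mask) >> 50) |
	       ((uint32_t)!!tunnel << 9);
}
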
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 68cef1a83e..bcf487c34e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -283,20 +283,20 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char fdir_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR, PKT_RX_FDIR,
-					PKT_RX_FDIR, PKT_RX_FDIR};
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 				const vector unsigned char fdir_all_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID};
 				vector unsigned char fdir_id_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 				/* Extract flow_tag field. */
 				vector unsigned char ftag0 = vec_perm(mcqe1,
 							zero, flow_mark_shuf);
@@ -316,7 +316,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 					ol_flags_mask,
 					(vector unsigned long)fdir_all_flags);
 
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				invalid_mask = (vector unsigned char)
 					vec_cmpeq((vector unsigned int)ftag,
 					(vector unsigned int)zero);
@@ -376,10 +376,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char vlan_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED)};
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED)};
 				const vector unsigned char cv_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
@@ -433,10 +433,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 			const vector unsigned char hash_mask =
 				(vector unsigned char)(vector unsigned int) {
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH};
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH};
 			const vector unsigned char rearm_flags =
 				(vector unsigned char)(vector unsigned int) {
 				(uint32_t)t_pkt->ol_flags,
@@ -531,13 +531,13 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	vector unsigned char pinfo, ptype;
 	vector unsigned char ol_flags = (vector unsigned char)
 		(vector unsigned int){
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag};
 	vector unsigned char cv_flags;
 	const vector unsigned char zero = (vector unsigned char){0};
@@ -551,21 +551,21 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 		(vector unsigned char)(vector unsigned int){
 		0x00000003, 0x00000003, 0x00000003, 0x00000003};
 	const vector unsigned char cv_flag_sel = (vector unsigned char){
-		0, (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1), 0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1), 0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		0, (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), 0,
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), 0,
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0};
 	const vector unsigned char cv_mask =
 		(vector unsigned char)(vector unsigned int){
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED};
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED};
 	const vector unsigned char mbuf_init =
 		(vector unsigned char)vec_vsx_ld
 			(0, (vector unsigned char *)&rxq->mbuf_initializer);
@@ -602,19 +602,19 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 			0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00};
 		const vector unsigned char fdir_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR, PKT_RX_FDIR,
-			PKT_RX_FDIR, PKT_RX_FDIR};
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 		vector unsigned char fdir_id_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 		vector unsigned char flow_tag, invalid_mask;
 
 		flow_tag = (vector unsigned char)
 			vec_and((vector unsigned long)pinfo,
 			(vector unsigned long)pinfo_ft_mask);
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = (vector unsigned char)
 			vec_cmpeq((vector unsigned int)flow_tag,
 			(vector unsigned int)zero);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 5ff792f4cb..aa36df29a0 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -220,12 +220,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint32x4_t ft_mask =
 					vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
 				const uint32x4_t fdir_flags =
-					vdupq_n_u32(PKT_RX_FDIR);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
 				const uint32x4_t fdir_all_flags =
-					vdupq_n_u32(PKT_RX_FDIR |
-						    PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR |
+						    RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t fdir_id_flags =
-					vdupq_n_u32(PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t invalid_mask, ftag;
 
 				__asm__ volatile
@@ -240,7 +240,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				invalid_mask = vceqzq_u32(ftag);
 				ol_flags_mask = vorrq_u32(ol_flags_mask,
 							  fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = vorrq_u32(ol_flags,
 					vbicq_u32(fdir_flags, invalid_mask));
 				/* Mask out invalid entries. */
@@ -276,8 +276,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					mcq[pos % 8 + 3].hdr_type;
 				const uint32x4_t vlan_mask =
-					vdupq_n_u32(PKT_RX_VLAN |
-						    PKT_RX_VLAN_STRIPPED);
+					vdupq_n_u32(RTE_MBUF_F_RX_VLAN |
+						    RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const uint32x4_t cv_mask =
 					vdupq_n_u32(MLX5_CQE_VLAN_STRIPPED);
 				const uint32x4_t pkt_cv = {
@@ -317,7 +317,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const uint32x4_t hash_flags =
-				vdupq_n_u32(PKT_RX_RSS_HASH);
+				vdupq_n_u32(RTE_MBUF_F_RX_RSS_HASH);
 			const uint32x4_t rearm_flags =
 				vdupq_n_u32((uint32_t)t_pkt->ol_flags);
 
@@ -396,22 +396,22 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	uint16x4_t ptype;
 	uint32x4_t pinfo, cv_flags;
 	uint32x4_t ol_flags =
-		vdupq_n_u32(rxq->rss_hash * PKT_RX_RSS_HASH |
+		vdupq_n_u32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 			    rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	const uint32x4_t ptype_ol_mask = { 0x106, 0x106, 0x106, 0x106 };
 	const uint8x16_t cv_flag_sel = {
 		0,
-		(uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0
 	};
 	const uint32x4_t cv_mask =
-		vdupq_n_u32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		vdupq_n_u32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const uint64x2_t mbuf_init = vld1q_u64
 				((const uint64_t *)&rxq->mbuf_initializer);
 	uint64x2_t rearm0, rearm1, rearm2, rearm3;
@@ -419,11 +419,11 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 
 	if (rxq->mark) {
 		const uint32x4_t ft_def = vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
-		const uint32x4_t fdir_flags = vdupq_n_u32(PKT_RX_FDIR);
-		uint32x4_t fdir_id_flags = vdupq_n_u32(PKT_RX_FDIR_ID);
+		const uint32x4_t fdir_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
+		uint32x4_t fdir_id_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 		uint32x4_t invalid_mask;
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = vceqzq_u32(flow_tag);
 		ol_flags = vorrq_u32(ol_flags,
 				     vbicq_u32(fdir_flags, invalid_mask));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index adf991f013..b0fc29d7b9 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -204,12 +204,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const __m128i ft_mask =
 					_mm_set1_epi32(0xffffff00);
 				const __m128i fdir_flags =
-					_mm_set1_epi32(PKT_RX_FDIR);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
 				const __m128i fdir_all_flags =
-					_mm_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 				__m128i fdir_id_flags =
-					_mm_set1_epi32(PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 
 				/* Extract flow_tag field. */
 				__m128i ftag0 =
@@ -223,7 +223,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 
 				ol_flags_mask = _mm_or_si128(ol_flags_mask,
 							     fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
 							 fdir_flags));
@@ -260,8 +260,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					_mm_extract_epi8(mcqe2, 8);
 				const __m128i vlan_mask =
-					_mm_set1_epi32(PKT_RX_VLAN |
-						       PKT_RX_VLAN_STRIPPED);
+					_mm_set1_epi32(RTE_MBUF_F_RX_VLAN |
+						       RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const __m128i cv_mask =
 					_mm_set1_epi32(MLX5_CQE_VLAN_STRIPPED);
 				const __m128i pkt_cv =
@@ -303,7 +303,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const __m128i hash_flags =
-				_mm_set1_epi32(PKT_RX_RSS_HASH);
+				_mm_set1_epi32(RTE_MBUF_F_RX_RSS_HASH);
 			const __m128i rearm_flags =
 				_mm_set1_epi32((uint32_t)t_pkt->ol_flags);
 
@@ -381,7 +381,7 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 {
 	__m128i pinfo0, pinfo1;
 	__m128i pinfo, ptype;
-	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * PKT_RX_RSS_HASH |
+	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 					  rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	__m128i cv_flags;
 	const __m128i zero = _mm_setzero_si128();
@@ -390,17 +390,17 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	const __m128i pinfo_mask = _mm_set1_epi32(0x3);
 	const __m128i cv_flag_sel =
 		_mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, 0,
-			     (uint8_t)((PKT_RX_IP_CKSUM_GOOD |
-					PKT_RX_L4_CKSUM_GOOD) >> 1),
+			     (uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+					RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
-			     (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
+			     (uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
 			     0);
 	const __m128i cv_mask =
-		_mm_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			      PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		_mm_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const __m128i mbuf_init =
 		_mm_load_si128((__m128i *)&rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -416,12 +416,12 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	ptype = _mm_unpacklo_epi64(pinfo0, pinfo1);
 	if (rxq->mark) {
 		const __m128i pinfo_ft_mask = _mm_set1_epi32(0xffffff00);
-		const __m128i fdir_flags = _mm_set1_epi32(PKT_RX_FDIR);
-		__m128i fdir_id_flags = _mm_set1_epi32(PKT_RX_FDIR_ID);
+		const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
+		__m128i fdir_id_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 		__m128i flow_tag, invalid_mask;
 
 		flow_tag = _mm_and_si128(pinfo, pinfo_ft_mask);
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = _mm_cmpeq_epi32(flow_tag, zero);
 		ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1efe912a06..3c71d825b8 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -78,7 +78,7 @@ uint16_t mlx5_tx_burst_##func(void *txq, \
 
 /* Mbuf dynamic flag offset for inline. */
 extern uint64_t rte_net_mlx5_dynf_inline_mask;
-#define PKT_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
+#define RTE_MBUF_F_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
 
 extern uint32_t mlx5_ptype_table[] __rte_cache_aligned;
 extern uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
@@ -513,22 +513,22 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	if (!MLX5_TXOFF_CONFIG(SWP))
 		return 0;
 	ol = loc->mbuf->ol_flags;
-	tunnel = ol & PKT_TX_TUNNEL_MASK;
+	tunnel = ol & RTE_MBUF_F_TX_TUNNEL_MASK;
 	/*
 	 * Check whether Software Parser is required.
 	 * Only customized tunnels may ask for.
 	 */
-	if (likely(tunnel != PKT_TX_TUNNEL_UDP && tunnel != PKT_TX_TUNNEL_IP))
+	if (likely(tunnel != RTE_MBUF_F_TX_TUNNEL_UDP && tunnel != RTE_MBUF_F_TX_TUNNEL_IP))
 		return 0;
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
-	idx = (ol & (PKT_TX_L4_MASK | PKT_TX_IPV6 | PKT_TX_OUTER_IPV6)) >> 52;
-	idx |= (tunnel == PKT_TX_TUNNEL_UDP) ? (1 << 9) : 0;
+	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 | RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
+	idx |= (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP) ? (1 << 9) : 0;
 	*swp_flags = mlx5_swp_types_table[idx];
 	/*
 	 * Set offsets for SW parser. Since ConnectX-5, SW parser just
@@ -538,19 +538,19 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & RTE_MBUF_F_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
-	if (tunnel == PKT_TX_TUNNEL_UDP)
+	if (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP)
 		set |= off >> 1; /* Outer L4 offset. */
-	if (ol & (PKT_TX_IPV4 | PKT_TX_IPV6)) { /* Inner IP. */
-		const uint64_t csum = ol & PKT_TX_L4_MASK;
+	if (ol & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) { /* Inner IP. */
+		const uint64_t csum = ol & RTE_MBUF_F_TX_L4_MASK;
 		off += loc->mbuf->l2_len;
 		set |= (off >> 1) << 24; /* Inner L3 offset. */
-		if (csum == PKT_TX_TCP_CKSUM ||
-		    csum == PKT_TX_UDP_CKSUM ||
-		    (MLX5_TXOFF_CONFIG(TSO) && ol & PKT_TX_TCP_SEG)) {
+		if (csum == RTE_MBUF_F_TX_TCP_CKSUM ||
+		    csum == RTE_MBUF_F_TX_UDP_CKSUM ||
+		    (MLX5_TXOFF_CONFIG(TSO) && ol & RTE_MBUF_F_TX_TCP_SEG)) {
 			off += loc->mbuf->l3_len;
 			set |= (off >> 1) << 16; /* Inner L4 offset. */
 		}
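
A worked example of the SWP table index assembled above, assuming the
canonical flag layout (L4 type in bits 52:53, RTE_MBUF_F_TX_IPV6 at bit 56,
RTE_MBUF_F_TX_OUTER_IPV6 at bit 60), which the >> 52 shift maps to bits 0:1,
4 and 8:

#include <stdint.h>
#include <rte_mbuf_core.h>

static uint32_t swp_table_idx(uint64_t ol)
{
	uint64_t tunnel = ol & RTE_MBUF_F_TX_TUNNEL_MASK;
	uint32_t idx;

	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 |
		     RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
	idx |= (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP) ? (1 << 9) : 0;
	/* e.g. UDP tunnel + inner IPv4/UDP csum: 0x3 | (1 << 9) = 0x203 */
	return idx;
}
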
@@ -572,16 +572,16 @@ static __rte_always_inline uint8_t
 txq_ol_cksum_to_cs(struct rte_mbuf *buf)
 {
 	uint32_t idx;
-	uint8_t is_tunnel = !!(buf->ol_flags & PKT_TX_TUNNEL_MASK);
-	const uint64_t ol_flags_mask = PKT_TX_TCP_SEG | PKT_TX_L4_MASK |
-				       PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM;
+	uint8_t is_tunnel = !!(buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
+	const uint64_t ol_flags_mask = RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_L4_MASK |
+				       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	idx = ((buf->ol_flags & ol_flags_mask) >> 50) | (!!is_tunnel << 9);
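
The >> 50 here bakes in the same bit positions that the octeontx2 driver
guards with RTE_BUILD_BUG_ON() later in this series; an equivalent
compile-time check would look like this (illustrative sketch, not in the
patch):

#include <rte_common.h>
#include <rte_mbuf_core.h>

static inline void cs_idx_layout_check(void)
{
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_SEG != (1ULL << 50));
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
}
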
@@ -952,11 +952,11 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		/*
 		 * We should get here only if the device supports
 		 * this feature correctly.
@@ -1012,7 +1012,7 @@ mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = RTE_BE16(MLX5_ESEG_MIN_INLINE_SIZE);
@@ -1095,7 +1095,7 @@ mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
 	es->inline_hdr_sz = rte_cpu_to_be_16(inlen);
@@ -1203,7 +1203,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
 			MLX5_ASSERT(loc->mbuf_nseg > 1);
 			MLX5_ASSERT(loc->mbuf);
 			--loc->mbuf_nseg;
-			if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+			if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 				unsigned int diff;
 
 				if (copy >= must) {
@@ -1307,7 +1307,7 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	MLX5_ASSERT(inlen >= MLX5_ESEG_MIN_INLINE_SIZE);
 	pdst = (uint8_t *)&es->inline_data;
@@ -1814,13 +1814,13 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
 	if (unlikely((!inlen || !loc->mbuf->tso_segsz)))
 		return MLX5_TXCMP_CODE_ERROR;
-	if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+	if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		inlen += loc->mbuf->outer_l2_len + loc->mbuf->outer_l3_len;
 	/* Packet must contain all TSO headers. */
 	if (unlikely(inlen > MLX5_MAX_TSO_HEADER ||
@@ -1929,7 +1929,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN)
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2028,7 +2028,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2036,7 +2036,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_ERROR;
 	MLX5_ASSERT(txq->inlen_send >= MLX5_ESEG_MIN_INLINE_SIZE);
 	if (inlen > txq->inlen_send ||
-	    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 		struct rte_mbuf *mbuf;
 		unsigned int nxlen;
 		uintptr_t start;
@@ -2058,7 +2058,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			 * support the offload, will do with software inline.
 			 */
 			inlen = MLX5_ESEG_MIN_INLINE_SIZE;
-		} else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+		} else if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE ||
 			   nxlen > txq->inlen_send) {
 			return mlx5_tx_packet_multi_send(txq, loc, olx);
 		} else {
@@ -2198,7 +2198,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 		if (loc->elts_free < NB_SEGS(loc->mbuf))
 			return MLX5_TXCMP_CODE_EXIT;
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/* Proceed with multi-segment TSO. */
 			ret = mlx5_tx_packet_multi_tso(txq, loc, olx);
 		} else if (MLX5_TXOFF_CONFIG(INLINE)) {
@@ -2224,7 +2224,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 			continue;
 		/* Here ends the series of multi-segment packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 			return MLX5_TXCMP_CODE_TSO;
 		return MLX5_TXCMP_CODE_SINGLE;
 	}
@@ -2291,7 +2291,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+		    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2302,7 +2302,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		       loc->mbuf->l3_len + loc->mbuf->l4_len;
 		if (unlikely((!hlen || !loc->mbuf->tso_segsz)))
 			return MLX5_TXCMP_CODE_ERROR;
-		if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+		if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			hlen += loc->mbuf->outer_l2_len +
 				loc->mbuf->outer_l3_len;
 		/* Segment must contain all TSO headers. */
@@ -2368,7 +2368,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		if (MLX5_TXOFF_CONFIG(MULTI) &&
 		    unlikely(NB_SEGS(loc->mbuf) > 1))
 			return MLX5_TXCMP_CODE_MULTI;
-		if (likely(!(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)))
+		if (likely(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)))
 			return MLX5_TXCMP_CODE_SINGLE;
 		/* Continue with the next TSO packet. */
 	}
@@ -2409,14 +2409,14 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 	/* Check for TSO packet. */
 	if (newp &&
 	    MLX5_TXOFF_CONFIG(TSO) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return MLX5_TXCMP_CODE_TSO;
 	/* Check if eMPW is enabled at all. */
 	if (!MLX5_TXOFF_CONFIG(EMPW))
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2469,7 +2469,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* Fill metadata field if needed. */
 	if (MLX5_TXOFF_CONFIG(METADATA) &&
-		es->metadata != (loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		es->metadata != (loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 				 *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0))
 		return false;
 	/* Legacy MPW can send packets with the same length only. */
@@ -2478,7 +2478,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2914,7 +2914,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline or not inline - that's the Question. */
 			if (dlen > txq->inlen_empw ||
-			    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
 				goto pointer_empw;
 			if (MLX5_TXOFF_CONFIG(MPW)) {
 				if (dlen > txq->inlen_send)
@@ -2939,7 +2939,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3004,7 +3004,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN));
+					    RTE_MBUF_F_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3149,7 +3149,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3170,7 +3170,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 				if (inlen <= MLX5_ESEG_MIN_INLINE_SIZE)
 					return MLX5_TXCMP_CODE_ERROR;
 				if (loc->mbuf->ol_flags &
-				    PKT_TX_DYNF_NOINLINE) {
+				    RTE_MBUF_F_TX_DYNF_NOINLINE) {
 					/*
 					 * The hint flag not to inline packet
 					 * data is set. Check whether we can
@@ -3380,7 +3380,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
@@ -3576,7 +3576,7 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		/* Dedicated branch for single-segment TSO packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc.mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc.mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/*
 			 * TSO might require special way for inlining
 			 * (dedicated parameters) and is sent with
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f..d4a06d5795 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -64,9 +64,9 @@
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
 			    DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-				PKT_TX_TCP_CKSUM | \
-				PKT_TX_UDP_CKSUM)
+#define MVNETA_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+				RTE_MBUF_F_TX_TCP_CKSUM | \
+				RTE_MBUF_F_TX_UDP_CKSUM)
 
 struct mvneta_priv {
 	/* Hot fields, used in fast path. */
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc090..de53ef935f 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -304,18 +304,18 @@ mvneta_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = NETA_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = NETA_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if (ol_flags & PKT_TX_TCP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if (ol_flags & PKT_TX_UDP_CKSUM) {
+	} else if (ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
@@ -342,15 +342,15 @@ mvneta_desc_to_ol_flags(struct neta_ppio_desc *desc)
 
 	status = neta_ppio_inq_desc_get_l3_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags = PKT_RX_IP_CKSUM_BAD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags = PKT_RX_IP_CKSUM_GOOD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	status = neta_ppio_inq_desc_get_l4_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	return flags;
 }
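
Note the Rx checksum status is a two-bit field per layer (GOOD/BAD/NONE/
UNKNOWN), so consumers should compare against the mask rather than test
single bits. Generic DPDK usage, shown for context only:

#include <stdint.h>
#include <rte_mbuf.h>

static uint16_t count_bad_csum(struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t bad = 0;

	for (uint16_t i = 0; i < n; i++) {
		uint64_t f = pkts[i]->ol_flags;

		if ((f & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
					RTE_MBUF_F_RX_IP_CKSUM_BAD ||
		    (f & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
					RTE_MBUF_F_RX_L4_CKSUM_BAD)
			bad++;
	}
	return bad;
}
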
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8d..ed6823c2ae 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -69,9 +69,9 @@
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
 			  DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-			      PKT_TX_TCP_CKSUM | \
-			      PKT_TX_UDP_CKSUM)
+#define MRVL_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+			      RTE_MBUF_F_TX_TCP_CKSUM | \
+			      RTE_MBUF_F_TX_UDP_CKSUM)
 
 static const char * const valid_args[] = {
 	MRVL_IFACE_NAME_ARG,
@@ -2545,18 +2545,18 @@ mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc, uint64_t packet_type)
 	if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
 		status = pp2_ppio_inq_desc_get_l3_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_IP_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			flags |= PKT_RX_IP_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (((packet_type & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) ||
 	    ((packet_type & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP)) {
 		status = pp2_ppio_inq_desc_get_l4_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_L4_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			flags |= PKT_RX_L4_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return flags;
@@ -2716,18 +2716,18 @@ mrvl_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = PP2_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = PP2_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if ((ol_flags & PKT_TX_L4_MASK) ==  PKT_TX_UDP_CKSUM) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) ==  RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index afef7a96a3..8af9a084b9 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -615,7 +615,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->vlan_info != HN_NDIS_VLAN_INFO_INVALID) {
 		m->vlan_tci = info->vlan_info;
-		m->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 
 		/* NDIS always strips tag, put it back if necessary */
 		if (!hv->vlan_strip && rte_vlan_insert(&m)) {
@@ -630,18 +630,18 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->csum_info != HN_NDIS_RXCSUM_INFO_INVALID) {
 		if (info->csum_info & NDIS_RXCSUM_INFO_IPCS_OK)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (info->csum_info & (NDIS_RXCSUM_INFO_UDPCS_OK
 				       | NDIS_RXCSUM_INFO_TCPCS_OK))
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (info->csum_info & (NDIS_RXCSUM_INFO_TCPCS_FAILED
 					    | NDIS_RXCSUM_INFO_UDPCS_FAILED))
-			m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	if (info->hash_info != HN_NDIS_HASH_INFO_INVALID) {
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		m->hash.rss = info->hash_value;
 	}
 
@@ -1331,17 +1331,17 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_LSO2_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_LSO);
 
-		if (m->ol_flags & PKT_TX_IPV6) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			*pi_data = NDIS_LSO2_INFO_MAKEIPV6(hlen,
 							   m->tso_segsz);
 		} else {
@@ -1349,23 +1349,23 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 							   m->tso_segsz);
 		}
 	} else if (m->ol_flags &
-		   (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)) {
+		   (RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_TXCSUM_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_CSUM);
 		*pi_data = 0;
 
-		if (m->ol_flags & PKT_TX_IPV6)
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6)
 			*pi_data |= NDIS_TXCSUM_INFO_IPV6;
-		if (m->ol_flags & PKT_TX_IPV4) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 			*pi_data |= NDIS_TXCSUM_INFO_IPV4;
 
-			if (m->ol_flags & PKT_TX_IP_CKSUM)
+			if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				*pi_data |= NDIS_TXCSUM_INFO_IPCS;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKTCPCS(hlen);
-		else if (m->ol_flags & PKT_TX_UDP_CKSUM)
+		else if (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKUDPCS(hlen);
 	}
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 0dcaf525f6..03afc779cf 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -203,7 +203,7 @@ nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	}
 
 	mbuf->hash.rss = hash;
-	mbuf->ol_flags |= PKT_RX_RSS_HASH;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	switch (hash_type) {
 	case NFP_NET_RSS_IPV4:
@@ -245,9 +245,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	/* If IPv4 and IP checksum error, fail */
 	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
 	    !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
-		mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
 	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
@@ -255,9 +255,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 		return;
 
 	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
-		mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
-		mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 /*
@@ -403,7 +403,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) &&
 		    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) {
 			mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan);
-			mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		/* Adding the mbuf to the mbuf array passed by the app */
@@ -821,7 +821,7 @@ nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 
 	ol_flags = mb->ol_flags;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		goto clean_txd;
 
 	txd->l3_offset = mb->l2_len;
@@ -853,19 +853,19 @@ nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 	ol_flags = mb->ol_flags;
 
 	/* IPv6 does not need checksum */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
 		break;
 	}
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 		txd->flags |= PCIE_DESC_TX_CSUM;
 }
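
The switch masks with RTE_MBUF_F_TX_L4_MASK first because the L4 request is a
two-bit field (TCP = 1, SCTP = 2, UDP = 3 at bits 52:53). A sketch of why the
masked compare is the safe form, assuming that canonical encoding:

#include <stdbool.h>
#include <stdint.h>
#include <rte_mbuf_core.h>

static bool wants_udp_csum(uint64_t ol_flags)
{
	/* a plain (ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) would also be
	 * true for TCP requests, since UDP = TCP | SCTP bit-wise */
	return (ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM;
}
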
 
@@ -929,7 +929,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN) &&
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..eeadd555c7 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -242,20 +242,20 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 	 * 0x2 - TCP L4 checksum
 	 * 0x3 - SCTP L4 checksum
 	 */
-	const uint8_t csum = (!(((ol_flags ^ PKT_TX_UDP_CKSUM) >> 52) & 0x3) +
-		      (!(((ol_flags ^ PKT_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
-		      (!(((ol_flags ^ PKT_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
-
-	const uint8_t is_tunnel_parsed = (!!(ol_flags & PKT_TX_TUNNEL_GTP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN_GPE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GRE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GENEVE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IPIP));
-
-	const uint8_t csum_outer = (!!(ol_flags & PKT_TX_OUTER_UDP_CKSUM) ||
-				    !!(ol_flags & PKT_TX_TUNNEL_UDP));
+	const uint8_t csum = (!(((ol_flags ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
+
+	const uint8_t is_tunnel_parsed = (!!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GTP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GRE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GENEVE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IPIP));
+
+	const uint8_t csum_outer = (!!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) ||
+				    !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_UDP));
 	const uint8_t outer_l2_len = m->outer_l2_len;
 	const uint8_t l2_len = m->l2_len;
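
The XOR construction above is an equality test on the two-bit L4 field:
((ol ^ value) >> 52) & 0x3 is zero iff the field equals value. A minimal
demonstration, assuming the canonical bit 52:53 placement:

#include <assert.h>
#include <stdint.h>
#include <rte_mbuf_core.h>

static void xor_field_test_demo(void)
{
	uint64_t ol = RTE_MBUF_F_TX_TCP_CKSUM;

	assert(!(((ol ^ RTE_MBUF_F_TX_TCP_CKSUM) >> 52) & 0x3)); /* match */
	assert((((ol ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3));  /* differ */
}
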
 
@@ -266,7 +266,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = outer_l2_len;
 			send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 			/* Outer L4 */
 			send_hdr->w0.ckl4 = csum_outer;
@@ -277,7 +277,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			/* Set clke for PKO to calculate inner IPV4 header
 			 * checksum.
 			 */
-			send_hdr->w0.ckle = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckle = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.cklf = csum;
@@ -286,7 +286,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = l2_len;
 			send_hdr->w0.l4ptr = l2_len + m->l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.ckl4 = csum;
@@ -296,7 +296,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = outer_l2_len;
 		send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 		/* Outer L4 */
 		send_hdr->w0.ckl4 = csum_outer;
@@ -305,7 +305,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = l2_len;
 		send_hdr->w0.l4ptr = l2_len + m->l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 		/* Inner L4 */
 		send_hdr->w0.ckl4 = csum;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e..541f793d5a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -745,15 +745,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
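
What these guards buy: with the bit positions pinned at build time, the fast
path can lift descriptor fields straight out of ol_flags with a shift, as the
otx2 Tx path does below. Illustrative sketch:

#include <stdint.h>
#include <rte_mbuf_core.h>

static uint8_t nix_l4type(uint64_t ol_flags)
{
	/* 0 = none, 1 = TCP, 2 = SCTP, 3 = UDP; bits 52:53 feed the NIX
	 * send-descriptor L4 type field directly (see w1.il4type below) */
	return (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
}
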
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
index 4764608c2d..5fa9ae1396 100644
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -264,9 +264,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -274,46 +274,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
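
For orientation (my summary, not patch text): the generator above fills a
lookup array indexed by the 12-bit (errcode << 4 | errlev) tuple, so the Rx
hot path replaces this whole decision tree with a single load:

#include <stdint.h>

static uint64_t
rx_ol_flags_lookup(const uint64_t *tbl, uint8_t errlev, uint8_t errcode)
{
	/* mirrors the generator loop: errlev = idx & 0xf,
	 * errcode = (idx & 0xff0) >> 4 */
	uint32_t idx = ((uint32_t)errcode << 4) | (errlev & 0xf);

	return tbl[idx];
}
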
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..5a7d220e22 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,7 +92,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -103,7 +103,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -205,10 +205,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0; ol_flags1 = 0;
 			ol_flags2 = 0; ol_flags3 = 0;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..530bf0082f 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -88,15 +88,15 @@ otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*otx2_timestamp_dynfield(mbuf, tstamp) =
 				rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 					*otx2_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-				PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+				RTE_MBUF_F_RX_IEEE1588_TMST |
 				tstamp->rx_tstamp_dynflag;
 		}
 	}
@@ -161,9 +161,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -252,7 +252,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	int i;
 
 	if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	/* 20 bits of tag would have the SPI */
 	spi = cq->tag & 0xFFFFF;
@@ -266,7 +266,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 
 	if (sa->replay_win_sz) {
 		if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
@@ -294,7 +294,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	m_len = ip_len + l2_len;
 	m->data_len = m_len;
 	m->pkt_len = m_len;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -318,7 +318,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -326,11 +326,11 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..afc47ca888 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -364,26 +364,26 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -655,40 +655,40 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..c9558e50a7 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -29,8 +29,8 @@
 	 NIX_TX_OFFLOAD_TSO_F)
 
 #define NIX_UDP_TUN_BITMASK \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) | \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 #define NIX_LSO_FORMAT_IDX_TSOV4	(0)
 #define NIX_LSO_FORMAT_IDX_TSOV6	(1)
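
How NIX_UDP_TUN_BITMASK is consumed (see otx2_nix_xmit_prepare_tso() below):
the tunnel type is a small integer at bits 45:48, so each type becomes one
bit of a set and the UDP-tunnel test reduces to a shift and mask. Sketch:

#include <stdint.h>
#include <rte_mbuf_core.h>

#define UDP_TUN_BITMASK \
	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))

static uint8_t is_udp_tunnel(uint64_t ol_flags)
{
	return (UDP_TUN_BITMASK >>
		((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
}
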
@@ -54,7 +54,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -67,7 +67,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -152,12 +152,12 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 	uint64_t mask, ol_flags = m->ol_flags;
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			m->l2_len + m->l3_len + m->l4_len;
 
@@ -166,15 +166,15 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
-				(2 << !!(ol_flags & PKT_TX_OUTER_IPV6)));
+				(2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -189,7 +189,7 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
-				m->l4_len + (2 << !!(ol_flags & PKT_TX_IPV6)));
+				m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -239,11 +239,11 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -255,15 +255,15 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -274,16 +274,16 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 			((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -299,29 +299,29 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR &&
 	    flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -332,18 +332,18 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 050c6f5c32..1b4dfff3c3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1639,9 +1639,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1649,9 +1649,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			flags = fp_cqe->tunnel_pars_flags.flags;
@@ -1684,31 +1684,31 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (unlikely(CQE_HAS_VLAN(parse_flag) ||
 			     CQE_HAS_OUTER_VLAN(parse_flag))) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		if (rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -1837,7 +1837,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
 			tpa_start_flg = true;
 			/* Mark it as LRO packet */
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 			/* In split mode, seg_len is the same as len_on_first_bd
 			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
@@ -1908,9 +1908,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1918,9 +1918,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				  rxq->rx_hw_errors++;
-				  ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				  ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				  ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				  ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			if (tpa_start_flg)
@@ -1957,32 +1957,32 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (CQE_HAS_VLAN(parse_flag) ||
 		    CQE_HAS_OUTER_VLAN(parse_flag)) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		/* RSS Hash */
 		if (qdev->rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -2178,7 +2178,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (ol_flags & PKT_TX_TCP_SEG) {
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
 				rte_errno = EINVAL;
 				break;
@@ -2196,14 +2196,14 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 		}
 		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
 			/* We support only limited tunnel protocols */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				uint64_t temp;
 
-				temp = ol_flags & PKT_TX_TUNNEL_MASK;
-				if (temp == PKT_TX_TUNNEL_VXLAN ||
-				    temp == PKT_TX_TUNNEL_GENEVE ||
-				    temp == PKT_TX_TUNNEL_MPLSINUDP ||
-				    temp == PKT_TX_TUNNEL_GRE)
+				temp = ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK;
+				if (temp == RTE_MBUF_F_TX_TUNNEL_VXLAN ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GENEVE ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_MPLSINUDP ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GRE)
 					continue;
 			}
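
qede_xmit_prep_pkts() is the PMD's tx_prepare hook; applications opt into
these checks by calling rte_eth_tx_prepare() before the burst. Generic usage
sketch:

#include <rte_ethdev.h>

static uint16_t
send_burst(uint16_t port, uint16_t queue, struct rte_mbuf **pkts, uint16_t n)
{
	/* validates offload flags (e.g. unsupported tunnel types above)
	 * and stops at the first mbuf the device cannot handle */
	uint16_t ready = rte_eth_tx_prepare(port, queue, pkts, n);

	return rte_eth_tx_burst(port, queue, pkts, ready);
}
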
 
@@ -2311,13 +2311,13 @@ qede_xmit_pkts_regular(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM)
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM)))
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM)))
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 
@@ -2456,7 +2456,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * offloads. Don't rely on pkt_type marked by Rx, instead use
 		 * tx_ol_flags to decide.
 		 */
-		tunn_flg = !!(tx_ol_flags & PKT_TX_TUNNEL_MASK);
+		tunn_flg = !!(tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 		if (tunn_flg) {
 			/* Check against max which is Tunnel IPv6 + ext */
@@ -2477,8 +2477,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* Outer IP checksum offload */
-			if (tx_ol_flags & (PKT_TX_OUTER_IP_CKSUM |
-					   PKT_TX_OUTER_IPV4)) {
+			if (tx_ol_flags & (RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+					   RTE_MBUF_F_TX_OUTER_IPV4)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -2490,8 +2490,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * and inner layer lengths need to be provided in
 			 * mbuf.
 			 */
-			if ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
-						PKT_TX_TUNNEL_MPLSINUDP) {
+			if ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ==
+						RTE_MBUF_F_TX_TUNNEL_MPLSINUDP) {
 				mplsoudp_flg = true;
 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 				qede_mpls_tunn_tx_sanity_check(mbuf, txq);
@@ -2524,18 +2524,18 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				    1 << ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT;
 
 				/* Mark inner IPv6 if present */
-				if (tx_ol_flags & PKT_TX_IPV6)
+				if (tx_ol_flags & RTE_MBUF_F_TX_IPV6)
 					bd2_bf1 |=
 						1 << ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT;
 
 				/* Inner L4 offsets */
-				if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-				     (tx_ol_flags & (PKT_TX_UDP_CKSUM |
-							PKT_TX_TCP_CKSUM))) {
+				if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+				     (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM |
+							RTE_MBUF_F_TX_TCP_CKSUM))) {
 					/* Determines if BD3 is needed */
 					tunn_ipv6_ext_flg = true;
-					if ((tx_ol_flags & PKT_TX_L4_MASK) ==
-							PKT_TX_UDP_CKSUM) {
+					if ((tx_ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+							RTE_MBUF_F_TX_UDP_CKSUM) {
 						bd2_bf1 |=
 							1 << ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT;
 					}
@@ -2553,7 +2553,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			} /* End MPLSoUDP */
 		} /* End Tunnel handling */
 
-		if (tx_ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			lso_flg = true;
 			if (unlikely(txq->nb_tx_avail <
 						ETH_TX_MIN_BDS_PER_LSO_PKT))
@@ -2570,7 +2570,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			bd1_bd_flags_bf |= 1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
+			/* RTE_MBUF_F_TX_TCP_SEG implies RTE_MBUF_F_TX_TCP_CKSUM */
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			mss = rte_cpu_to_le_16(mbuf->tso_segsz);
@@ -2587,14 +2587,14 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
@@ -2602,8 +2602,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * csum offload is requested then we need to force
 			 * recalculation of L4 tunnel header csum also.
 			 */
-			if (tunn_flg && ((tx_ol_flags & PKT_TX_TUNNEL_MASK) !=
-							PKT_TX_TUNNEL_GRE)) {
+			if (tunn_flg && ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) !=
+							RTE_MBUF_F_TX_TUNNEL_GRE)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT;
@@ -2611,8 +2611,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM))) {
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM))) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 025ed6fff2..828df1cf99 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -144,20 +144,20 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
-#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
-				   PKT_TX_TCP_CKSUM             | \
-				   PKT_TX_UDP_CKSUM             | \
-				   PKT_TX_OUTER_IP_CKSUM        | \
-				   PKT_TX_TCP_SEG		| \
-				   PKT_TX_IPV4			| \
-				   PKT_TX_IPV6)
+#define QEDE_TX_CSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM              | \
+				   RTE_MBUF_F_TX_TCP_CKSUM             | \
+				   RTE_MBUF_F_TX_UDP_CKSUM             | \
+				   RTE_MBUF_F_TX_OUTER_IP_CKSUM        | \
+				   RTE_MBUF_F_TX_TCP_SEG		| \
+				   RTE_MBUF_F_TX_IPV4			| \
+				   RTE_MBUF_F_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN		| \
-			      PKT_TX_TUNNEL_MASK)
+			      RTE_MBUF_F_TX_VLAN		| \
+			      RTE_MBUF_F_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
 
 /* TPA related structures */
 struct qede_agg_info {
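
The NOTSUP mask above uses the usual PMD complement trick: XOR-ing the
driver's supported set against RTE_MBUF_F_TX_OFFLOAD_MASK yields exactly
the flags the driver cannot honour. A minimal sketch of how such a mask is
typically consumed in a tx_prepare path (illustrative only, not part of
this patch; assumes <rte_mbuf.h> and <errno.h>):

static inline int
example_check_tx_offloads(const struct rte_mbuf *m)
{
	/* Any requested offload outside the supported set is an error. */
	if (m->ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK)
		return -ENOTSUP;
	return 0;
}
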
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 777807985b..20f3b4eaba 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -241,7 +241,7 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			   unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
-	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	unsigned int tcph_off = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				 m->outer_l2_len + m->outer_l3_len : 0) +
 				m->l2_len + m->l3_len;
 	unsigned int header_len = tcph_off + m->l4_len;
@@ -279,21 +279,21 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			 * to proceed with additional checks below.
 			 * Otherwise, throw an error.
 			 */
-			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 ||
 			    tso_bounce_buffer_len == 0)
 				return EINVAL;
 		}
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		case 0:
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 			/* FALLTHROUGH */
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			if (!(m->ol_flags &
-			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+			      (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 				return EINVAL;
 		}
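
For reference, an application requesting TSO on a VXLAN-encapsulated packet
would set flags along these lines so that the check above passes (an
illustrative sketch, not part of this patch; example_request_vxlan_tso is a
hypothetical helper):

static inline void
example_request_vxlan_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 |  /* outer IP type is mandatory */
		       RTE_MBUF_F_TX_TUNNEL_VXLAN |
		       RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_TCP_SEG;
	m->tso_segsz = mss;
	/* outer_l2_len, outer_l3_len, l2_len, l3_len and l4_len
	 * must be filled in as well. */
}
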
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 1bf04f565a..35e1650851 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -203,7 +203,7 @@ sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 static inline uint64_t
@@ -212,7 +212,7 @@ sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD : RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 }
 
 static uint32_t
@@ -268,11 +268,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -309,7 +309,7 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -320,11 +320,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
@@ -401,7 +401,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 	if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
 	    EFX_TEST_OWORD_BIT(rx_prefix[0],
 			       ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		/* EFX_OWORD_FIELD converts little-endian to CPU */
 		m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
 					      ESF_GZ_RX_PREFIX_RSS_HASH);
@@ -414,7 +414,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 		user_mark = EFX_OWORD_FIELD(rx_prefix[0],
 					    ESF_GZ_RX_PREFIX_USER_MARK);
 		if (user_mark != SFC_EF100_USER_MARK_INVALID) {
-			ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 			m->hash.fdir.hi = user_mark;
 		}
 	}
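
A matched user mark is reported through the flow-director fields: both
RTE_MBUF_F_RX_FDIR and RTE_MBUF_F_RX_FDIR_ID must be set for hash.fdir.hi
to be meaningful to the application. A sketch of the producer side
(illustrative; the invalid-mark sentinel is device-specific):

static inline void
example_set_user_mark(struct rte_mbuf *m, uint32_t mark, uint32_t invalid_mark)
{
	if (mark != invalid_mark) {
		m->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
		m->hash.fdir.hi = mark;
	}
}
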
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 53d01612d1..78c16168ed 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -98,7 +98,7 @@ static int
 sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 			     struct rte_mbuf *m)
 {
-	size_t header_len = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t header_len = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			     m->outer_l2_len + m->outer_l3_len : 0) +
 			    m->l2_len + m->l3_len + m->l4_len;
 	size_t payload_len = m->pkt_len - header_len;
@@ -106,12 +106,12 @@ sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 	unsigned int nb_payload_descs;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
-	switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 	case 0:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		break;
 	default:
 		return ENOTSUP;
@@ -164,11 +164,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * pseudo-header checksum which is calculated below,
 		 * but requires contiguous packet headers.
 		 */
-		if ((m->ol_flags & PKT_TX_TUNNEL_MASK) &&
-		    (m->ol_flags & PKT_TX_L4_MASK)) {
+		if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		    (m->ol_flags & RTE_MBUF_F_TX_L4_MASK)) {
 			calc_phdr_cksum = true;
 			max_nb_header_segs = 1;
-		} else if (m->ol_flags & PKT_TX_TCP_SEG) {
+		} else if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			max_nb_header_segs = txq->tso_max_nb_header_descs;
 		}
 
@@ -180,7 +180,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG) {
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			ret = sfc_ef100_tx_prepare_pkt_tso(txq, m);
 			if (unlikely(ret != 0)) {
 				rte_errno = ret;
@@ -197,7 +197,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 * and does not require any assistance.
 			 */
 			ret = rte_net_intel_cksum_flags_prepare(m,
-					m->ol_flags & ~PKT_TX_IP_CKSUM);
+					m->ol_flags & ~RTE_MBUF_F_TX_IP_CKSUM);
 			if (unlikely(ret != 0)) {
 				rte_errno = -ret;
 				break;
@@ -315,10 +315,10 @@ sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
 	uint8_t inner_l3;
 
 	switch (tx_tunnel) {
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_VXLAN;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_GENEVE;
 		break;
 	default:
@@ -338,25 +338,25 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 	uint16_t part_cksum_w;
 	uint16_t l4_offset_w;
 
-	if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == 0) {
-		outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+	if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) == 0) {
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
 		partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
 		part_cksum_w = 0;
 		l4_offset_w = 0;
 	} else {
-		outer_l3 = (m->ol_flags & PKT_TX_OUTER_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(m->ol_flags &
-							   PKT_TX_TUNNEL_MASK);
+							   RTE_MBUF_F_TX_TUNNEL_MASK);
 
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP;
 			part_cksum_w = offsetof(struct rte_tcp_hdr, cksum) >> 1;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP;
 			part_cksum_w = offsetof(struct rte_udp_hdr,
 						dgram_cksum) >> 1;
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -423,7 +423,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 	 */
 	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
 	uint8_t inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(
-					m->ol_flags & PKT_TX_TUNNEL_MASK);
+					m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
@@ -505,7 +505,7 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 #define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
 	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* Tx TSO descriptor */
 		extra_descs++;
 		/*
@@ -552,7 +552,7 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 	size_t header_len;
 	size_t remaining_hdr_len;
 
-	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		outer_iph_off = m->outer_l2_len;
 		outer_udph_off = outer_iph_off + m->outer_l3_len;
 	} else {
@@ -671,7 +671,7 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				break;
 		}
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
 		} else {
 			id = added++ & txq->ptr_mask;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f..eda468df3f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -374,13 +374,13 @@ sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			rte_pktmbuf_data_len(m) = pkt_len;
 
 			m->ol_flags |=
-				(PKT_RX_RSS_HASH *
+				(RTE_MBUF_F_RX_RSS_HASH *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN)) |
-				(PKT_RX_FDIR_ID *
+				(RTE_MBUF_F_RX_FDIR_ID *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN)) |
-				(PKT_RX_FDIR *
+				(RTE_MBUF_F_RX_FDIR *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN));
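
The multiply-by-boolean construct above sets each flag without a branch.
The idiom, extracted into a helper purely for illustration (not part of
this patch):

static inline uint64_t
example_flag_if(uint64_t flag, int cond)
{
	/* flag * !!cond == (cond ? flag : 0), with no branch */
	return flag * (uint64_t)!!cond;
}
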
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42..8ddb830642 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -330,7 +330,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/* Mask RSS hash offload flag if RSS is not enabled */
 	sfc_ef10_rx_ev_to_offloads(rx_ev, m,
 				   (rxq->flags & SFC_EF10_RXQ_RSS_HASH) ?
-				   ~0ull : ~PKT_RX_RSS_HASH);
+				   ~0ull : ~RTE_MBUF_F_RX_RSS_HASH);
 
 	/* data_off already moved past pseudo header */
 	pseudo_hdr = (uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
@@ -338,7 +338,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/*
 	 * Always get RSS hash from pseudo header to avoid
	 * condition/branching. Whether it is valid depends on
-	 * PKT_RX_RSS_HASH in m->ol_flags.
+	 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 	 */
 	m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
@@ -392,7 +392,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		/*
 		 * Always get RSS hash from pseudo header to avoid
		 * condition/branching. Whether it is valid depends on
-		 * PKT_RX_RSS_HASH in m->ol_flags.
+		 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 		 */
 		m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
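
The ol_mask argument lets the caller strip flags that do not apply, here
clearing RTE_MBUF_F_RX_RSS_HASH when RSS is disabled, with a single AND
instead of a per-packet branch. A sketch of the pattern (illustrative):

static inline uint64_t
example_apply_ol_mask(uint64_t ol_flags, int rss_enabled)
{
	return ol_flags & (rss_enabled ? ~0ull : ~RTE_MBUF_F_RX_RSS_HASH);
}
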
 
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index a7f5b9168b..821e2227bb 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -27,9 +27,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			   uint64_t ol_mask)
 {
 	uint32_t tun_ptype = 0;
-	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_IP_CKSUM_* */
 	int8_t ip_csum_err_bit;
-	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_L4_CKSUM_* */
 	int8_t l4_csum_err_bit;
 	uint32_t l2_ptype = 0;
 	uint32_t l3_ptype = 0;
@@ -76,7 +76,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
 		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
 						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
-			ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
@@ -105,9 +105,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP4:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH |
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH |
 			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
-			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 		break;
 	case ESE_DZ_L3_CLASS_IP6_FRAG:
 		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
@@ -116,7 +116,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP6:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		break;
 	case ESE_DZ_L3_CLASS_ARP:
 		/* Override Layer 2 packet type */
@@ -144,7 +144,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_TCP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UDP:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
@@ -152,7 +152,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_UDP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UNKNOWN:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 277fe6c6ca..e58f8bbe8c 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -341,7 +341,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * the size limit. Perform the check in debug mode since MTU
 		 * more than 9k is not supported, but the limit here is 16k-1.
 		 */
-		if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			struct rte_mbuf *m_seg;
 
 			for (m_seg = m; m_seg != NULL; m_seg = m_seg->next) {
@@ -371,7 +371,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t iph_off = ((m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
 			 m_seg->l2_len;
 	size_t tcph_off = iph_off + m_seg->l3_len;
@@ -489,10 +489,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 *
 	 * The same concern applies to outer UDP datagram length field.
 	 */
-	switch (m_seg->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		sfc_tso_outer_udp_fix_len(first_m_seg, hdr_addr);
 		break;
 	default:
@@ -506,10 +506,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * filled in the TSO mbuf. Use zero IPID if there is no IPv4 flag.
 	 * If the packet is still IPv4, HW will simply start from zero IPID.
 	 */
-	if (first_m_seg->ol_flags & PKT_TX_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
-	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
 						first_m_seg->outer_l2_len);
 
@@ -648,7 +648,7 @@ sfc_ef10_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (likely(pktp + 1 != pktp_end))
 			rte_mbuf_prefetch_part1(pktp[1]);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			int rc;
 
 			rc = sfc_ef10_xmit_tso_pkt(txq, m_seg, &added,
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple supports neither TSO nor VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
+			     (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9..66024f3e53 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -148,15 +148,15 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 
 	switch (desc_flags & (EFX_PKT_IPV4 | EFX_CKSUM_IPV4)) {
 	case (EFX_PKT_IPV4 | EFX_CKSUM_IPV4):
-		mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case EFX_PKT_IPV4:
-		mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_IP_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_IP_CKSUM_MASK) ==
-			   PKT_RX_IP_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -164,16 +164,16 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 		 (EFX_PKT_TCP | EFX_PKT_UDP | EFX_CKSUM_TCPUDP))) {
 	case (EFX_PKT_TCP | EFX_CKSUM_TCPUDP):
 	case (EFX_PKT_UDP | EFX_CKSUM_TCPUDP):
-		mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case EFX_PKT_TCP:
 	case EFX_PKT_UDP:
-		mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_L4_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_L4_CKSUM_MASK) ==
-			   PKT_RX_L4_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -224,7 +224,7 @@ sfc_efx_rx_set_rss_hash(struct sfc_efx_rxq *rxq, unsigned int flags,
 						      EFX_RX_HASHALG_TOEPLITZ,
 						      mbuf_data);
 
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 }
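
The RTE_BUILD_BUG_ON above pins down the invariant that the *_CKSUM_UNKNOWN
values are zero, so leaving the checksum flags unset already reports
"status unknown". A sketch of what that buys a driver (illustrative only):

static inline uint64_t
example_ip_csum_flags(int ip_checked, int ip_ok)
{
	if (!ip_checked)
		return 0; /* equals RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN by definition */
	return ip_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD : RTE_MBUF_F_RX_IP_CKSUM_BAD;
}
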
 
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 29d0836b65..927e351a6e 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -153,7 +153,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
 	 * zero IPID.
 	 */
-	if (m->ol_flags & PKT_TX_IPV4)
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(tsoh, nh_off);
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index f081e856e1..9029ad1590 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -59,7 +59,7 @@ sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
 	size_t field_ofst;
 	rte_be16_t len;
 
-	if (m->ol_flags & PKT_TX_IPV4) {
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		field_ofst = offsetof(struct rte_ipv4_hdr, total_length);
 		len = rte_cpu_to_be_16(m->l3_len + ip_payload_len);
 	} else {
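
The branch above exists because the two IP versions carry length
differently: the IPv4 total_length field includes the IP header, while the
IPv6 payload_length field does not. A self-contained sketch of the same
decision (illustrative; assumes <rte_byteorder.h> and <rte_mbuf.h>):

static inline rte_be16_t
example_inner_ip_len(const struct rte_mbuf *m, uint16_t ip_payload_len)
{
	if (m->ol_flags & RTE_MBUF_F_TX_IPV4)
		return rte_cpu_to_be_16(m->l3_len + ip_payload_len);
	return rte_cpu_to_be_16(ip_payload_len); /* IPv6: header not counted */
}
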
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 936ae815ea..fd79e67efa 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
+	uint16_t this_tag = ((m->ol_flags & RTE_MBUF_F_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
@@ -869,7 +869,7 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			/*
 			 * We expect 'pkt->l[2, 3, 4]_len' values
 			 * to be set correctly by the caller
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf7..3b62232553 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -340,8 +340,8 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 
 		cksum = ~rte_raw_cksum(iph, l3_len);
 		mbuf->ol_flags |= cksum ?
-			PKT_RX_IP_CKSUM_BAD :
-			PKT_RX_IP_CKSUM_GOOD;
+			RTE_MBUF_F_RX_IP_CKSUM_BAD :
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (l3 == RTE_PTYPE_L3_IPV6) {
 		struct rte_ipv6_hdr *iph = l3_hdr;
 
@@ -376,7 +376,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 					 * indicates that the sender did not
 					 * generate one [RFC 768].
 					 */
-					mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 					return;
 				}
 			}
@@ -387,7 +387,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 								 l4_hdr);
 		}
 		mbuf->ol_flags |= cksum_ok ?
-			PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 }
 
@@ -544,7 +544,7 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 {
 	void *l3_hdr = packet + l2_len;
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4)) {
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)) {
 		struct rte_ipv4_hdr *iph = l3_hdr;
 		uint16_t cksum;
 
@@ -552,18 +552,18 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 		cksum = rte_raw_cksum(iph, l3_len);
 		iph->hdr_checksum = (cksum == 0xffff) ? cksum : ~cksum;
 	}
-	if (ol_flags & PKT_TX_L4_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 		void *l4_hdr;
 
 		l4_hdr = packet + l2_len + l3_len;
-		if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 			*l4_cksum = &((struct rte_udp_hdr *)l4_hdr)->dgram_cksum;
-		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
+		else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM)
 			*l4_cksum = &((struct rte_tcp_hdr *)l4_hdr)->cksum;
 		else
 			return;
 		**l4_cksum = 0;
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			*l4_phdr_cksum = rte_ipv4_phdr_cksum(l3_hdr, 0);
 		else
 			*l4_phdr_cksum = rte_ipv6_phdr_cksum(l3_hdr, 0);
@@ -627,9 +627,9 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
 
 		nb_segs = mbuf->nb_segs;
 		if (txq->csum &&
-		    ((mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4) ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM))) {
+		    ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4) ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM))) {
 			is_cksum = 1;
 
 			/* Support only packets with at least layer 4
@@ -719,12 +719,12 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		uint16_t hdrs_len;
 		uint64_t tso;
 
-		tso = mbuf_in->ol_flags & PKT_TX_TCP_SEG;
+		tso = mbuf_in->ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 		if (tso) {
 			struct rte_gso_ctx *gso_ctx = &txq->gso_ctx;
 
 			/* TCP segmentation implies TCP checksum offload */
-			mbuf_in->ol_flags |= PKT_TX_TCP_CKSUM;
+			mbuf_in->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 			/* gso size is calculated without RTE_ETHER_CRC_LEN */
 			hdrs_len = mbuf_in->l2_len + mbuf_in->l3_len +
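
When the hardware cannot be relied on, the tap PMD completes L4 checksums
in software. The core of that computation can be sketched with the
librte_net helpers (illustrative; assumes <rte_ip.h> and that the TCP
header and payload are contiguous in memory):

static inline void
example_sw_tcp_cksum(struct rte_ipv4_hdr *iph, struct rte_tcp_hdr *tcph)
{
	tcph->cksum = 0;
	/* Covers the pseudo-header plus the TCP header and payload. */
	tcph->cksum = rte_ipv4_udptcp_cksum(iph, tcph);
}
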
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..4a433435c6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -42,10 +42,10 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
 	if (unlikely(ol_flags)) {
 		/* L4 cksum */
-		uint64_t l4_flags = ol_flags & PKT_TX_L4_MASK;
-		if (l4_flags == PKT_TX_TCP_CKSUM)
+		uint64_t l4_flags = ol_flags & RTE_MBUF_F_TX_L4_MASK;
+		if (l4_flags == RTE_MBUF_F_TX_TCP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
-		else if (l4_flags == PKT_TX_UDP_CKSUM)
+		else if (l4_flags == RTE_MBUF_F_TX_UDP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
 		else
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
@@ -54,7 +54,7 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
 
 		/* L3 cksum */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			sqe.hdr.csum_l3 = 1;
 	}
 
@@ -343,9 +343,9 @@ static inline uint64_t __rte_hot
 nicvf_set_olflags(const cqe_rx_word0_t cqe_rx_w0)
 {
 	static const uint64_t flag_table[3] __rte_cache_aligned = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_UNKNOWN,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 	};
 
 	const uint8_t idx = (cqe_rx_w0.err_opcode == CQE_RX_ERR_L4_CHK) << 1 |
@@ -409,7 +409,7 @@ nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
 {
 	if (likely(cqe_rx_w0.rss_alg)) {
 		pkt->hash.rss = cqe_rx_w2.rss_tag;
-		pkt->ol_flags |= PKT_RX_RSS_HASH;
+		pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	}
 }
@@ -454,8 +454,8 @@ nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 			pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 		if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 			if (unlikely(cqe_rx_w0.vlan_stripped)) {
-				pkt->ol_flags |= PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED;
+				pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED;
 				pkt->vlan_tci =
 					rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 			}
@@ -549,8 +549,8 @@ nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
 		pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 	if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 		if (unlikely(cqe_rx_w0.vlan_stripped)) {
-			pkt->ol_flags |= PKT_RX_VLAN
-				| PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+				| RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 		}
 	}
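
flag_table above turns the CQE error status into ol_flags with a single
indexed load. The idiom generalizes as follows (illustrative sketch,
assuming the two error conditions are mutually exclusive, as they are for
this CQE encoding):

static inline uint64_t
example_rx_csum_flags(int ip_err, int l4_err)
{
	static const uint64_t table[4] = {
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
		0, /* index 3 unused: errors are mutually exclusive here */
	};

	return table[((unsigned int)!!l4_err << 1) | !!ip_err];
}
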
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..3e1d40bbeb 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -12,7 +12,7 @@
 #define NICVF_RX_OFFLOAD_CKSUM          0x2
 #define NICVF_RX_OFFLOAD_VLAN_STRIP     0x4
 
-#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+#define NICVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
 
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 static inline uint16_t __attribute__((const))
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 0063994688..844b005249 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1136,10 +1136,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 53da1b8450..3c5941d71f 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -43,30 +43,30 @@
 #include "txgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define TXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define TXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define TXGBE_TX_IEEE1588_TMST 0
 #endif
 
 /* Bit Mask to indicate what bits required for building TX context */
-static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_OUTER_UDP_CKSUM |
+static const u64 TXGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM |
 #ifdef RTE_LIB_SECURITY
-		PKT_TX_SEC_OFFLOAD |
+		RTE_MBUF_F_TX_SEC_OFFLOAD |
 #endif
 		TXGBE_TX_IEEE1588_TMST);
 
 #define TXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
 
 /*
  * Prefetch a cache line into all cache levels.
@@ -339,7 +339,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 	type_tucmd_mlhl |= TXGBE_TXD_PTID(tx_offload.ptid);
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_offload_mask.l2_len |= ~0;
 		tx_offload_mask.l3_len |= ~0;
 		tx_offload_mask.l4_len |= ~0;
@@ -347,25 +347,25 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		mss_l4len_idx |= TXGBE_TXD_MSS(tx_offload.tso_segsz);
 		mss_l4len_idx |= TXGBE_TXD_L4LEN(tx_offload.l4_len);
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
 			tx_offload_mask.l2_len |= ~0;
@@ -378,7 +378,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 
 	vlan_macip_lens = TXGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
 
-	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		tx_offload_mask.outer_tun_len |= ~0;
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
@@ -386,16 +386,16 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		tunnel_seed = TXGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
 		tunnel_seed |= TXGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
 
-		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-		case PKT_TX_TUNNEL_IPIP:
+		switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		case RTE_MBUF_F_TX_TUNNEL_IPIP:
 			/* for non UDP / GRE tunneling, set to 0b */
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
-		case PKT_TX_TUNNEL_VXLAN_GPE:
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_UDP;
 			break;
-		case PKT_TX_TUNNEL_GRE:
+		case RTE_MBUF_F_TX_TUNNEL_GRE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_GRE;
 			break;
 		default:
@@ -408,13 +408,13 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
 
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union txgbe_crypto_tx_desc_md *md =
 				(union txgbe_crypto_tx_desc_md *)mdata;
 		tunnel_seed |= TXGBE_TXD_IPSEC_SAIDX(md->sa_idx);
@@ -477,26 +477,26 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_IPCS;
 	}
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_EIPCS;
 	}
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tmp |= TXGBE_TXD_CC;
 		/* implies IPv4 cksum */
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,11 +507,11 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= TXGBE_TXD_LINKSEC;
 	return cmdtype;
 }
@@ -525,67 +525,67 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
 		return txgbe_encode_ptype(ptype);
 
 	/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
-	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+	tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	/* L2 level */
 	ptype = RTE_PTYPE_L2_ETHER;
-	if (oflags & PKT_TX_VLAN)
+	if (oflags & RTE_MBUF_F_TX_VLAN)
 		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
 
 	/* L3 level */
-	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
 		ptype |= RTE_PTYPE_L3_IPV4;
-	else if (oflags & (PKT_TX_OUTER_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
 		ptype |= RTE_PTYPE_L3_IPV6;
 
-	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
-	else if (oflags & (PKT_TX_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_IPV6))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
 
 	/* L4 level */
-	switch (oflags & (PKT_TX_L4_MASK)) {
-	case PKT_TX_TCP_CKSUM:
+	switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
 		break;
 	}
 
-	if (oflags & PKT_TX_TCP_SEG)
+	if (oflags & RTE_MBUF_F_TX_TCP_SEG)
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 
 	/* Tunnel */
-	switch (oflags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GRE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GENEVE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
 		break;
-	case PKT_TX_TUNNEL_IPIP:
-	case PKT_TX_TUNNEL_IP:
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
+	case RTE_MBUF_F_TX_TUNNEL_IP:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_IP;
@@ -669,19 +669,19 @@ txgbe_get_tun_len(struct rte_mbuf *mbuf)
 	const struct txgbe_genevehdr *gh;
 	uint8_t tun_len;
 
-	switch (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		tun_len = 0;
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		tun_len = sizeof(struct txgbe_udphdr)
 			+ sizeof(struct txgbe_vxlanhdr);
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tun_len = sizeof(struct txgbe_nvgrehdr);
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		gh = rte_pktmbuf_read(mbuf,
 			mbuf->outer_l2_len + mbuf->outer_l3_len,
 			sizeof(genevehdr), &genevehdr);
@@ -751,7 +751,7 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -895,20 +895,20 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		cmd_type_len = TXGBE_TXD_FCS;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= TXGBE_TXD_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is
 				 * not the packet len but the TCP payload len
 				 */
 				pkt_len -= (tx_offload.l2_len +
 					tx_offload.l3_len + tx_offload.l4_len);
 				pkt_len -=
-					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					(tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 					? tx_offload.outer_l2_len +
 					  tx_offload.outer_l3_len : 0;
 			}
@@ -1076,14 +1076,14 @@ static inline uint64_t
 txgbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 	int etfid = txgbe_etflt_id(TXGBE_RXD_PTID(pkt_info));
@@ -1108,12 +1108,12 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 	 * That can be found from rte_eth_rxmode.offloads flag
 	 */
 	pkt_flags = (rx_status & TXGBE_RXD_STAT_VLAN &&
-		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		     vlan_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
 		    ? vlan_flags : 0;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & TXGBE_RXD_STAT_1588)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1126,24 +1126,24 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	/* checksum offload can't be disabled */
 	if (rx_status & TXGBE_RXD_STAT_IPCS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_IPCS
-				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_L4CS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_L4CS
-				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_EIPCS &&
 	    rx_status & TXGBE_RXD_ERR_EIPCS) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & TXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & TXGBE_RXD_ERR_SECERR)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1226,10 +1226,10 @@ txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
 				txgbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
 				rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss =
 					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash =
 					rte_le_to_cpu_16(rxdp[j].qw0.hi.csum) &
 					TXGBE_ATR_HASH_MASK;
@@ -1541,7 +1541,7 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
@@ -1552,9 +1552,9 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
-		} else if (pkt_flags & PKT_RX_FDIR) {
+		} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash =
 				rte_le_to_cpu_16(rxd.qw0.hi.csum) &
 				TXGBE_ATR_HASH_MASK;
@@ -1616,7 +1616,7 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
@@ -1628,9 +1628,9 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
-	} else if (pkt_flags & PKT_RX_FDIR) {
+	} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash = rte_le_to_cpu_16(desc->qw0.hi.csum)
 				& TXGBE_ATR_HASH_MASK;
 		head->hash.fdir.id = rte_le_to_cpu_16(desc->qw0.hi.ipid);
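
The two branches above rely on m->hash being a union: the same bytes hold
either the RSS hash or the flow-director result, and RTE_MBUF_F_RX_RSS_HASH
vs. RTE_MBUF_F_RX_FDIR tell the application which view is valid. An
illustrative consumer-side sketch:

static inline uint32_t
example_get_rx_hash(const struct rte_mbuf *m, uint32_t dflt)
{
	if (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
		return m->hash.rss;
	if (m->ol_flags & RTE_MBUF_F_RX_FDIR)
		return m->hash.fdir.hash;
	return dflt;
}
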
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 214a6ee4c8..a3d6a1d2dc 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
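
rte_vlan_insert() takes a struct rte_mbuf ** because the tag is prepended
in software and the mbuf pointer may change; on failure the driver must
free the packet itself, as above. A sketch of the same fallback
(illustrative; assumes <rte_ether.h>):

static inline int
example_sw_vlan_insert(struct rte_mbuf **m)
{
	if (((*m)->ol_flags & RTE_MBUF_F_TX_VLAN) == 0)
		return 0;
	/* Returns 0 on success, a negative errno otherwise. */
	return rte_vlan_insert(m);
}
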
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 65f08b775a..d03fa08a75 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -929,7 +929,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags == 0 && hdr->gso_type == VIRTIO_NET_HDR_GSO_NONE)
 		return 0;
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -941,7 +941,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -963,7 +963,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	/* GSO request, save required information in mbuf */
@@ -979,8 +979,8 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 			case VIRTIO_NET_HDR_GSO_TCPV4:
 			case VIRTIO_NET_HDR_GSO_TCPV6:
-				m->ol_flags |= PKT_RX_LRO | \
-					PKT_RX_L4_CKSUM_NONE;
+				m->ol_flags |= RTE_MBUF_F_RX_LRO | \
+					RTE_MBUF_F_RX_L4_CKSUM_NONE;
 				break;
 			default:
 				return -EINVAL;
@@ -1747,7 +1747,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
+		if (unlikely(m->ol_flags & RTE_MBUF_F_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
@@ -1766,7 +1766,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			virtio_tso_fix_cksum(m);
 	}
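
virtio_tso_fix_cksum() reseeds the TCP checksum with a pseudo-header sum
that excludes the L4 length, which is what virtio expects for TSO. For
IPv4 the core of it can be sketched as follows (illustrative; assumes
<rte_ip.h>):

static inline void
example_seed_phdr_cksum(struct rte_ipv4_hdr *iph, struct rte_tcp_hdr *tcph)
{
	/* With RTE_MBUF_F_TX_TCP_SEG in ol_flags, rte_ipv4_phdr_cksum()
	 * leaves the L4 length out of the pseudo-header sum. */
	tcph->cksum = rte_ipv4_phdr_cksum(iph, RTE_MBUF_F_TX_TCP_SEG);
}
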
 
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 1d1db60da8..7c77128ff2 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -166,7 +166,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		return 0;
 
 	/* GSO not support in vec path, skip check */
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -178,7 +178,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -200,7 +200,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return 0;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 03957b2bd0..182e4aa74c 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -619,19 +619,19 @@ virtqueue_notify(struct virtqueue *vq)
 static inline void
 virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 {
-	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = cookie->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (cookie->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	switch (csum_l4) {
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
@@ -645,8 +645,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 	}
 
 	/* TCP Segmentation Offload */
-	if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-		hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		hdr->gso_type = (cookie->ol_flags & RTE_MBUF_F_TX_IPV6) ?
 			VIRTIO_NET_HDR_GSO_TCPV6 :
 			VIRTIO_NET_HDR_GSO_TCPV4;
 		hdr->gso_size = cookie->tso_segsz;
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 69e877f816..d961498fdf 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -48,15 +48,14 @@
 #include "vmxnet3_logs.h"
 #include "vmxnet3_ethdev.h"
 
-#define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN | \
-		PKT_TX_IPV6 |     \
-		PKT_TX_IPV4 |     \
-		PKT_TX_L4_MASK |  \
-		PKT_TX_TCP_SEG)
+#define	VMXNET3_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_VLAN | \
+		RTE_MBUF_F_TX_IPV6 |     \
+		RTE_MBUF_F_TX_IPV4 |     \
+		RTE_MBUF_F_TX_L4_MASK |  \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define	VMXNET3_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
 
 static const uint32_t rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2};
 
@@ -359,7 +358,7 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Non-TSO packet cannot occupy more than
 		 * VMXNET3_MAX_TXD_PER_PKT TX descriptors.
 		 */
-		if ((ol_flags & PKT_TX_TCP_SEG) == 0 &&
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 &&
 				m->nb_segs > VMXNET3_MAX_TXD_PER_PKT) {
 			rte_errno = EINVAL;
 			return i;
@@ -367,8 +366,8 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* check that only supported TX offloads are requested. */
 		if ((ol_flags & VMXNET3_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -416,7 +415,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		struct rte_mbuf *m_seg = txm;
 		int copy_size = 0;
-		bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+		bool tso = (txm->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0;
 		/* # of descriptors needed for a packet. */
 		unsigned count = txm->nb_segs;
 
@@ -520,7 +519,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN) {
+		if (txm->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
@@ -535,23 +534,23 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			gdesc->txd.msscof = mss;
 
 			deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
-		} else if (txm->ol_flags & PKT_TX_L4_MASK) {
+		} else if (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 			gdesc->txd.om = VMXNET3_OM_CSUM;
 			gdesc->txd.hlen = txm->l2_len + txm->l3_len;
 
-			switch (txm->ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
+			switch (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_tcp_hdr, cksum);
 				break;
-			case PKT_TX_UDP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_udp_hdr,
 						dgram_cksum);
 				break;
 			default:
 				PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
-					   txm->ol_flags & PKT_TX_L4_MASK);
+					   txm->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 				abort();
 			}
 			deferred++;
@@ -739,35 +738,35 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 
 			rxm->tso_segsz = rcde->mss;
 			*vmxnet3_segs_dynfield(rxm) = rcde->segCnt;
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 		}
 	} else { /* Offloads set in eop */
 		/* Check for RSS */
 		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rxm->hash.rss = rcd->rssHash;
 		}
 
 		/* Check for hardware stripped VLAN tag */
 		if (rcd->ts) {
-			ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
 		}
 
 		/* Check packet type, checksum errors, etc. */
 		if (rcd->cnc) {
-			ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 		} else {
 			if (rcd->v4) {
 				packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
 
 				if (rcd->ipc)
-					ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					ol_flags |= PKT_RX_IP_CKSUM_BAD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -775,17 +774,17 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else if (rcd->v6) {
 				packet_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -793,10 +792,10 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else {
@@ -804,7 +803,7 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 			}
 
 			/* Old variants of vmxnet3 do not provide MSS */
-			if ((ol_flags & PKT_RX_LRO) && rxm->tso_segsz == 0)
+			if ((ol_flags & RTE_MBUF_F_RX_LRO) && rxm->tso_segsz == 0)
 				rxm->tso_segsz = vmxnet3_guess_mss(hw,
 						rcd, rxm);
 		}
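
The VLAN flags keep their paired semantics under the new names:
RTE_MBUF_F_RX_VLAN means mbuf->vlan_tci is valid, and
RTE_MBUF_F_RX_VLAN_STRIPPED additionally means the tag was removed from
the packet data, as in the rcd->ts branch above. A small consumer-side
sketch:

	/* Sketch: recover the VLAN TCI whether or not the PMD stripped it. */
	static uint16_t
	rx_vlan_tci(const struct rte_mbuf *m)
	{
		if (m->ol_flags & RTE_MBUF_F_RX_VLAN)
			return m->vlan_tci; /* set for stripped and in-packet tags */
		return 0; /* no VLAN reported */
	}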
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index c79445ce7d..d1b5e9bb2a 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -139,7 +139,7 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(mbuf->ol_flags & EXT_ATTACHED_MBUF));
+				  !!(mbuf->ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 
diff --git a/examples/bpf/t2.c b/examples/bpf/t2.c
index b9bce746c0..67cd908cd6 100644
--- a/examples/bpf/t2.c
+++ b/examples/bpf/t2.c
@@ -6,7 +6,7 @@
  * eBPF program sample.
  * Accepts pointer to struct rte_mbuf as an input parameter.
  * cleanup mbuf's vlan_tci and all related RX flags
- * (PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED).
+ * (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED).
  * Doesn't touch contents of packet data.
  * To compile:
  * clang -O2 -target bpf -Wno-int-to-void-pointer-cast -c t2.c
@@ -27,7 +27,7 @@ entry(void *pkt)
 
 	mb = pkt;
 	mb->vlan_tci = 0;
-	mb->ol_flags &= ~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+	mb->ol_flags &= ~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	return 1;
 }
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f245369720..c11ed1c0f6 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -299,7 +299,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			rte_pktmbuf_free(m);
 
 			/* request HW to regenerate IPv4 cksum */
-			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 
 			/* If we fail to fragment the packet */
 			if (unlikely (len2 < 0))
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790b..bbe343e5fa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -359,7 +359,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 			}
 
 			/* update offloading flags */
-			m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			m->ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 		}
 		ip_dst = rte_be_to_cpu_32(ip_hdr->dst_addr);
 
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index bfa7ff7217..bd233752c8 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -159,8 +159,8 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
 
 	if ((ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
-			if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
+			if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
 				cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			else
 				cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -464,7 +464,7 @@ esp_outbound_post(struct rte_mbuf *m,
 
 	if ((type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 	} else {
 		RTE_ASSERT(cop != NULL);
 		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb822..0a19033a7a 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -466,7 +466,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	 * with the security session.
 	 */
 
-	if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD &&
+	if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD &&
 			rte_security_dynfield_is_registered()) {
 		struct ipsec_sa *sa;
 		struct ipsec_mbuf_metadata *priv;
@@ -533,7 +533,7 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ip->ip_sum = 0;
 
 		/* calculate IPv4 cksum in SW */
-		if ((pkt->ol_flags & PKT_TX_IP_CKSUM) == 0)
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) == 0)
 			ip->ip_sum = rte_ipv4_cksum((struct rte_ipv4_hdr *)ip);
 
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
@@ -696,7 +696,7 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 		}
 
 		/* Only check SPI match for processed IPSec packets */
-		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
+		if (i < lim && ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)) {
 			free_pkts(&m, 1);
 			continue;
 		}
@@ -978,7 +978,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -995,7 +995,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 0);
 		} else {
@@ -1029,7 +1029,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -1047,7 +1047,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 1);
 		} else {
@@ -2302,10 +2302,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		qconf->tx_queue_id[portid] = tx_queueid;
 
 		/* Pre-populate pkt offloads based on capabilities */
-		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
-		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
+		qconf->outbound.ipv4_offloads = RTE_MBUF_F_TX_IPV4;
+		qconf->outbound.ipv6_offloads = RTE_MBUF_F_TX_IPV6;
 		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
-			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
+			qconf->outbound.ipv4_offloads |= RTE_MBUF_F_TX_IP_CKSUM;
 
 		tx_queueid++;
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index c545497cee..f7703b7167 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -208,9 +208,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 
 	switch (type) {
 	case PKT_TYPE_PLAIN_IPV4:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -226,9 +226,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 
 	case PKT_TYPE_PLAIN_IPV6:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -367,7 +367,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 				      sess->security.ses, pkt, NULL);
 
 	/* Mark the packet for Tx security offload */
-	pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+	pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 	/* Get the port to which this pkt needs to be submitted */
 	port_id = sa->portid;
@@ -482,7 +482,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 						      NULL);
 
 			/* Mark the packet for Tx security offload */
-			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 			/* Provide L2 len for Outbound processing */
 			pkt->l2_len = RTE_ETHER_HDR_LEN;
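
The marking done here and in esp_outbound_post() follows one pattern:
tag the mbuf with RTE_MBUF_F_TX_SEC_OFFLOAD and, when the PMD requests
it, attach the session metadata. A minimal sketch under those
assumptions (error handling omitted):

	#include <rte_mbuf.h>
	#include <rte_security.h>

	/* Sketch: mark a packet for inline IPsec Tx processing. */
	static void
	mark_inline_tx(struct rte_security_ctx *ctx,
		       struct rte_security_session *ses, struct rte_mbuf *m)
	{
		m->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
		/* Needed only if the PMD sets RTE_SECURITY_TX_OLOAD_NEED_MDATA. */
		rte_security_set_pkt_metadata(ctx, ses, m, NULL);
	}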
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c9..7f2199290e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -32,7 +32,7 @@
 
 #define IP6_FULL_MASK (sizeof(((struct ip_addr *)NULL)->ip.ip6.ip6) * CHAR_BIT)
 
-#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)
+#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)
 
 struct supported_cipher_algo {
 	const char *keyword;
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fb..d5d5217497 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -463,7 +463,7 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 			   sizeof(struct clock_id));
 
 		/* Enable flag for hardware timestamping. */
-		created_pkt->ol_flags |= PKT_TX_IEEE1588_TMST;
+		created_pkt->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 
 		/* Read value from NIC to prevent latching with old value. */
 		rte_eth_timesync_read_tx_timestamp(ptp_data->portid,
@@ -625,7 +625,7 @@ lcore_main(void)
 				continue;
 
 			/* Packet is parsed to determine which type. 8< */
-			if (m->ol_flags & PKT_RX_IEEE1588_PTP)
+			if (m->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)
 				parse_ptp_frames(portid, m);
 			/* >8 End of packet is parsed to determine which type. */
 
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369..41e8fcdc30 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -77,13 +77,13 @@ static struct rte_eth_conf port_conf = {
  * Packet RX/TX
  *
  ***/
-#define PKT_RX_BURST_MAX                32
-#define PKT_TX_BURST_MAX                32
+#define RTE_MBUF_F_RX_BURST_MAX                32
+#define RTE_MBUF_F_TX_BURST_MAX                32
 #define TIME_TX_DRAIN                   200000ULL
 
 static uint16_t port_rx;
 static uint16_t port_tx;
-static struct rte_mbuf *pkts_rx[PKT_RX_BURST_MAX];
+static struct rte_mbuf *pkts_rx[RTE_MBUF_F_RX_BURST_MAX];
 struct rte_eth_dev_tx_buffer *tx_buffer;
 
 /* Traffic meter parameters are configured in the application. 8< */
@@ -188,7 +188,7 @@ main_loop(__rte_unused void *dummy)
 		}
 
 		/* Read packet burst from NIC RX */
-		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, PKT_RX_BURST_MAX);
+		nb_rx = rte_eth_rx_burst(port_rx, NIC_RX_QUEUE, pkts_rx, RTE_MBUF_F_RX_BURST_MAX);
 
 		/* Handle packets */
 		for (i = 0; i < nb_rx; i ++) {
@@ -420,13 +420,13 @@ main(int argc, char **argv)
 		rte_exit(EXIT_FAILURE, "Port %d TX queue setup error (%d)\n", port_tx, ret);
 
 	tx_buffer = rte_zmalloc_socket("tx_buffer",
-			RTE_ETH_TX_BUFFER_SIZE(PKT_TX_BURST_MAX), 0,
+			RTE_ETH_TX_BUFFER_SIZE(RTE_MBUF_F_TX_BURST_MAX), 0,
 			rte_eth_dev_socket_id(port_tx));
 	if (tx_buffer == NULL)
 		rte_exit(EXIT_FAILURE, "Port %d TX buffer allocation error\n",
 				port_tx);
 
-	rte_eth_tx_buffer_init(tx_buffer, PKT_TX_BURST_MAX);
+	rte_eth_tx_buffer_init(tx_buffer, RTE_MBUF_F_TX_BURST_MAX);
 
 	ret = rte_eth_dev_start(port_rx);
 	if (ret < 0)
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 1603c29fb5..f1248b0a36 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1036,15 +1036,15 @@ static void virtio_tx_offload(struct rte_mbuf *m)
 	tcp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
 		m->l2_len + m->l3_len);
 
-	m->ol_flags |= PKT_TX_TCP_SEG;
+	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) {
-		m->ol_flags |= PKT_TX_IPV4;
-		m->ol_flags |= PKT_TX_IP_CKSUM;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		ipv4_hdr = l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = rte_ipv4_phdr_cksum(l3_hdr, m->ol_flags);
 	} else { /* assume ethertype == RTE_ETHER_TYPE_IPV6 */
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		tcp_hdr->cksum = rte_ipv6_phdr_cksum(l3_hdr, m->ol_flags);
 	}
 }
@@ -1115,7 +1115,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
@@ -1139,7 +1139,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 		m->vlan_tci = vlan_tag;
 	}
 
-	if (m->ol_flags & PKT_RX_LRO)
+	if (m->ol_flags & RTE_MBUF_F_RX_LRO)
 		virtio_tx_offload(m);
 
 	tx_q->m_table[tx_q->len++] = m;
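
virtio_tx_offload() above also illustrates the rule for the renamed TSO
flags: a request needs RTE_MBUF_F_TX_TCP_SEG plus the IP-version flag, a
valid tso_segsz, and a seeded TCP pseudo-header checksum. A condensed
sketch of the same preparation for IPv4:

	/* Sketch: turn a coalesced (RTE_MBUF_F_RX_LRO) packet into a TSO request. */
	static void
	prep_tso_ipv4(struct rte_mbuf *m, struct rte_ipv4_hdr *ip,
		      struct rte_tcp_hdr *tcp, uint16_t mss)
	{
		m->tso_segsz = mss;
		m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
			       RTE_MBUF_F_TX_IP_CKSUM;
		ip->hdr_checksum = 0;
		tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
	}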
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674..330fbb2722 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1448,13 +1448,13 @@ struct rte_eth_conf {
 #define DEV_TX_OFFLOAD_SECURITY         0x00020000
 /**
  * Device supports generic UDP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
 /**
  * Device supports generic IP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..ba3e6f39b5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1422,11 +1422,12 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
  * RTE_FLOW_ITEM_TYPE_META
  *
  * Matches a specified metadata value. On egress, metadata can be set
- * either by mbuf dynamic metadata field with PKT_TX_DYNF_METADATA flag or
- * RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
+ * either by mbuf dynamic metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag
+ * or RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
  * sets metadata for a packet and the metadata will be reported via mbuf
- * metadata dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic mbuf
- * field must be registered in advance by rte_flow_dynf_metadata_register().
+ * metadata dynamic field with RTE_MBUF_DYNFLAG_RX_METADATA flag. The dynamic
+ * mbuf field must be registered in advance by
+ * rte_flow_dynf_metadata_register().
  */
 struct rte_flow_item_meta {
 	uint32_t data;
@@ -1900,8 +1901,8 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_JUMP,
 
 	/**
-	 * Attaches an integer value to packets and sets PKT_RX_FDIR and
-	 * PKT_RX_FDIR_ID mbuf flags.
+	 * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+	 * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
 	 *
 	 * See struct rte_flow_action_mark.
 	 */
@@ -1909,7 +1910,7 @@ enum rte_flow_action_type {
 
 	/**
 	 * Flags packets. Similar to MARK without a specific value; only
-	 * sets the PKT_RX_FDIR mbuf flag.
+	 * sets the RTE_MBUF_F_RX_FDIR mbuf flag.
 	 *
 	 * No associated configuration structure.
 	 */
@@ -2414,8 +2415,8 @@ enum rte_flow_action_type {
 /**
  * RTE_FLOW_ACTION_TYPE_MARK
  *
- * Attaches an integer value to packets and sets PKT_RX_FDIR and
- * PKT_RX_FDIR_ID mbuf flags.
+ * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+ * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
  *
  * This value is arbitrary and application-defined. Maximum allowed value
  * depends on the underlying implementation. It is returned in the
@@ -2960,10 +2961,10 @@ struct rte_flow_action_set_tag {
  * RTE_FLOW_ACTION_TYPE_SET_META
  *
  * Set metadata. Metadata set by mbuf metadata dynamic field with
- * PKT_TX_DYNF_DATA flag on egress will be overridden by this action. On
+ * RTE_MBUF_DYNFLAG_TX_METADATA flag on egress will be overridden by this action. On
  * ingress, the metadata will be carried by mbuf metadata dynamic field
- * with PKT_RX_DYNF_METADATA flag if set.  The dynamic mbuf field must be
- * registered in advance by rte_flow_dynf_metadata_register().
+ * with RTE_MBUF_DYNFLAG_RX_METADATA flag if set.  The dynamic mbuf field must
+ * be registered in advance by rte_flow_dynf_metadata_register().
  *
  * Altering partial bits is supported with mask. For bits which have never
  * been set, unpredictable value will be seen depending on driver
@@ -3261,8 +3262,12 @@ extern uint64_t rte_flow_dynf_metadata_mask;
 	RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t *)
 
 /* Mbuf dynamic flags for metadata. */
-#define PKT_RX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
-#define PKT_TX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
+#define RTE_MBUF_DYNFLAG_RX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_RX_DYNF_METADATA RTE_DEPRECATED(PKT_RX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_RX_METADATA
+#define RTE_MBUF_DYNFLAG_TX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_TX_DYNF_METADATA RTE_DEPRECATED(PKT_TX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_TX_METADATA
 
 __rte_experimental
 static inline uint32_t
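
Because the deprecated aliases expand to the same
rte_flow_dynf_metadata_mask, existing users keep compiling; new code
should spell the RTE_MBUF_DYNFLAG_* names. A sketch of setting egress
metadata under the new names (m is an mbuf about to be transmitted, the
value is arbitrary):

	/* Register the dynamic field/flag once at startup. */
	if (rte_flow_dynf_metadata_register() < 0)
		rte_exit(EXIT_FAILURE, "cannot register metadata dynfield\n");

	*RTE_FLOW_DYNF_METADATA(m) = 0xcafe;
	m->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;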
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..c67dbdf102 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -869,8 +869,8 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
 	uint16_t dropped;
 
 	if (!eth_rx_queue_info->ena_vector) {
-		/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
-		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
+		/* 0xffff ffff if RTE_MBUF_F_RX_RSS_HASH is set, otherwise 0 */
+		rss_mask = ~(((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0) - 1);
 		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
 		for (i = 0; i < num; i++) {
 			m = mbufs[i];
diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 4d5f303fa6..2c258b22bf 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -18,26 +18,26 @@
 #define TCP_HDR_PSH_MASK ((uint8_t)0x08)
 #define TCP_HDR_FIN_MASK ((uint8_t)0x01)
 
-#define IS_IPV4_TCP(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4))
-
-#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_GRE_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_GRE))
-
-#define IS_IPV4_UDP(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4))
+#define IS_IPV4_TCP(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4))
+
+#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_GRE_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_GRE))
+
+#define IS_IPV4_UDP(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4))
 
 /**
  * Internal function which updates the UDP header of a packet, following
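
These macros compare against the complete flag combination, so a caller
must set every required bit, including the tunnel type inside
RTE_MBUF_F_TX_TUNNEL_MASK. For instance, a packet that satisfies
IS_IPV4_VXLAN_TCP4() would be flagged roughly as below (a sketch; header
lengths are set separately):

	pkt->ol_flags |= RTE_MBUF_F_TX_TCP_SEG |	/* inner TCP segmentation */
			 RTE_MBUF_F_TX_IPV4 |		/* inner IPv4 */
			 RTE_MBUF_F_TX_OUTER_IPV4 |	/* outer IPv4 */
			 RTE_MBUF_F_TX_TUNNEL_VXLAN;	/* tunnel type in the mask */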
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 166aace73a..1a7ef30dde 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -37,7 +37,7 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	tail_idx = nb_segs - 1;
 
 	/* Only update UDP header for VxLAN packets. */
-	update_udp_hdr = (pkt->ol_flags & PKT_TX_TUNNEL_VXLAN) ? 1 : 0;
+	update_udp_hdr = (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ? 1 : 0;
 
 	for (i = 0; i < nb_segs; i++) {
 		update_ipv4_header(segs[i], outer_ipv4_offset, outer_id);
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee..58037d6b5d 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -43,7 +43,7 @@ rte_gso_segment(struct rte_mbuf *pkt,
 		return -EINVAL;
 
 	if (gso_ctx->gso_size >= pkt->pkt_len) {
-		pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
+		pkt->ol_flags &= (~(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG));
 		return 0;
 	}
 
@@ -57,26 +57,26 @@ rte_gso_segment(struct rte_mbuf *pkt,
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
 			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
 	} else {
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b1..777d0a55fb 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -77,8 +77,8 @@ struct rte_gso_ctx {
  *
  * Before calling rte_gso_segment(), applications must set proper ol_flags
  * for the packet. The GSO library uses the same macros as that of TSO.
- * For example, set PKT_TX_TCP_SEG and PKT_TX_IPV4 in ol_flags to segment
- * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the PKT_TX_TCP_SEG
+ * For example, set RTE_MBUF_F_TX_TCP_SEG and RTE_MBUF_F_TX_IPV4 in ol_flags to segment
+ * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the RTE_MBUF_F_TX_TCP_SEG
  * flag is removed for all GSO segments and the input packet.
  *
  * Each of the newly-created GSO segments is organized as a two-segment
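
A hedged usage sketch of the sequence this comment describes, assuming
gso_ctx was already configured with mempools, gso_types and gso_size:

	struct rte_mbuf *segs[16];
	int n;

	pkt->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
			 RTE_MBUF_F_TX_IP_CKSUM;
	n = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));
	if (n < 0)
		handle_error(n);	/* hypothetical: invalid request */
	/* n == 0: pkt fit in one segment; only the TSO flag was cleared. */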
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..17442a98f2 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -399,7 +399,7 @@ static inline int32_t
 trs_process_check(struct rte_mbuf *mb, struct rte_mbuf **ml,
 	uint32_t *tofs, struct rte_esp_tail espt, uint32_t hlen, uint32_t tlen)
 {
-	if ((mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) != 0 ||
+	if ((mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) != 0 ||
 			tlen + hlen > mb->pkt_len)
 		return -EBADMSG;
 
@@ -487,8 +487,8 @@ trs_process_step3(struct rte_mbuf *mb)
 	/* reset mbuf packet type */
 	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
@@ -505,8 +505,8 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
 	mb->packet_type = RTE_PTYPE_UNKNOWN;
 	mb->tx_offload = (mb->tx_offload & txof_msk) | txof_val;
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..2bbd5df2b8 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -544,7 +544,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	icv_len = sa->icv_len;
 
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0) {
 			ml = rte_pktmbuf_lastseg(mb[i]);
 			/* remove high-order 32 bits of esn from packet len */
 			mb[i]->pkt_len -= sa->sqh_len;
@@ -580,7 +580,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
 	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
 	for (i = 0; i != num; i++) {
 
-		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		mb[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 		if (ol_flags != 0)
 			rte_security_set_pkt_metadata(ss->security.ctx,
 				ss->security.ses, mb[i], NULL);
diff --git a/lib/ipsec/misc.h b/lib/ipsec/misc.h
index 79b9a20762..8e72ca992d 100644
--- a/lib/ipsec/misc.h
+++ b/lib/ipsec/misc.h
@@ -173,7 +173,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	j = num - n;
 	for (i = 0; j != 0 && i != num; i++) {
 		if (st[i] != 0) {
-			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			mb[i]->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 			j--;
 		}
 	}
diff --git a/lib/ipsec/rte_ipsec_group.h b/lib/ipsec/rte_ipsec_group.h
index ea3bdfad95..60ab297710 100644
--- a/lib/ipsec/rte_ipsec_group.h
+++ b/lib/ipsec/rte_ipsec_group.h
@@ -61,7 +61,7 @@ rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
  * Take as input completed crypto ops, extract related mbufs
  * and group them by rte_ipsec_session they belong to.
  * For mbufs whose crypto-op wasn't completed successfully,
- * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
  * Note that mbufs with undetermined SA (session-less) are not freed
  * by the function, but are placed beyond mbufs for the last valid group.
  * It is a user responsibility to handle them further.
@@ -95,9 +95,9 @@ rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
 		m = cop[i]->sym[0].m_src;
 		ns = cop[i]->sym[0].session;
 
-		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
-			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 		/* no valid session found */
 		if (ns == NULL) {
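
A consumer-side sketch matching the contract above: once the crypto ops
are completed, drop every mbuf whose processing failed (forward_pkt() is
hypothetical):

	for (i = 0; i != num; i++) {
		if (mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
			rte_pktmbuf_free(mb[i]);
		else
			forward_pkt(mb[i]);
	}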
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..4754093873 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -590,7 +590,7 @@ pkt_flag_process(const struct rte_ipsec_session *ss,
 
 	k = 0;
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0)
 			k++;
 		else
 			dr[i - k] = i;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index f7e3c1a187..f2e740c363 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -133,7 +133,7 @@ rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
 	RTE_ASSERT(m->shinfo->fcb_opaque == m);
 
 	rte_mbuf_ext_refcnt_set(m->shinfo, 1);
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	if (m->next != NULL) {
 		m->next = NULL;
 		m->nb_segs = 1;
@@ -213,7 +213,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	rte_mbuf_refcnt_set(m, 1);
 	m->next = NULL;
 
@@ -620,7 +620,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	__rte_pktmbuf_copy_hdr(mc, m);
 
 	/* copied mbuf is not indirect or external */
-	mc->ol_flags = m->ol_flags & ~(IND_ATTACHED_MBUF|EXT_ATTACHED_MBUF);
+	mc->ol_flags = m->ol_flags & ~(RTE_MBUF_F_INDIRECT|RTE_MBUF_F_EXTERNAL);
 
 	prev = &mc->next;
 	m_last = mc;
@@ -685,7 +685,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	fprintf(f, "  pkt_len=%u, ol_flags=%#"PRIx64", nb_segs=%u, port=%u",
 		m->pkt_len, m->ol_flags, m->nb_segs, m->port);
 
-	if (m->ol_flags & (PKT_RX_VLAN | PKT_TX_VLAN))
+	if (m->ol_flags & (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_TX_VLAN))
 		fprintf(f, ", vlan_tci=%u", m->vlan_tci);
 
 	fprintf(f, ", ptype=%#"PRIx32"\n", m->packet_type);
@@ -751,30 +751,30 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_RX_VLAN: return "PKT_RX_VLAN";
-	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
-	case PKT_RX_FDIR: return "PKT_RX_FDIR";
-	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
-	case PKT_RX_L4_CKSUM_GOOD: return "PKT_RX_L4_CKSUM_GOOD";
-	case PKT_RX_L4_CKSUM_NONE: return "PKT_RX_L4_CKSUM_NONE";
-	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
-	case PKT_RX_IP_CKSUM_GOOD: return "PKT_RX_IP_CKSUM_GOOD";
-	case PKT_RX_IP_CKSUM_NONE: return "PKT_RX_IP_CKSUM_NONE";
-	case PKT_RX_OUTER_IP_CKSUM_BAD: return "PKT_RX_OUTER_IP_CKSUM_BAD";
-	case PKT_RX_VLAN_STRIPPED: return "PKT_RX_VLAN_STRIPPED";
-	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
-	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-	case PKT_RX_FDIR_ID: return "PKT_RX_FDIR_ID";
-	case PKT_RX_FDIR_FLX: return "PKT_RX_FDIR_FLX";
-	case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
-	case PKT_RX_QINQ: return "PKT_RX_QINQ";
-	case PKT_RX_LRO: return "PKT_RX_LRO";
-	case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
-	case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
-	case PKT_RX_OUTER_L4_CKSUM_BAD: return "PKT_RX_OUTER_L4_CKSUM_BAD";
-	case PKT_RX_OUTER_L4_CKSUM_GOOD: return "PKT_RX_OUTER_L4_CKSUM_GOOD";
-	case PKT_RX_OUTER_L4_CKSUM_INVALID:
-		return "PKT_RX_OUTER_L4_CKSUM_INVALID";
+	case RTE_MBUF_F_RX_VLAN: return "RTE_MBUF_F_RX_VLAN";
+	case RTE_MBUF_F_RX_RSS_HASH: return "RTE_MBUF_F_RX_RSS_HASH";
+	case RTE_MBUF_F_RX_FDIR: return "RTE_MBUF_F_RX_FDIR";
+	case RTE_MBUF_F_RX_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_L4_CKSUM_NONE: return "RTE_MBUF_F_RX_L4_CKSUM_NONE";
+	case RTE_MBUF_F_RX_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_IP_CKSUM_GOOD: return "RTE_MBUF_F_RX_IP_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_IP_CKSUM_NONE: return "RTE_MBUF_F_RX_IP_CKSUM_NONE";
+	case RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_VLAN_STRIPPED: return "RTE_MBUF_F_RX_VLAN_STRIPPED";
+	case RTE_MBUF_F_RX_IEEE1588_PTP: return "RTE_MBUF_F_RX_IEEE1588_PTP";
+	case RTE_MBUF_F_RX_IEEE1588_TMST: return "RTE_MBUF_F_RX_IEEE1588_TMST";
+	case RTE_MBUF_F_RX_FDIR_ID: return "RTE_MBUF_F_RX_FDIR_ID";
+	case RTE_MBUF_F_RX_FDIR_FLX: return "RTE_MBUF_F_RX_FDIR_FLX";
+	case RTE_MBUF_F_RX_QINQ_STRIPPED: return "RTE_MBUF_F_RX_QINQ_STRIPPED";
+	case RTE_MBUF_F_RX_QINQ: return "RTE_MBUF_F_RX_QINQ";
+	case RTE_MBUF_F_RX_LRO: return "RTE_MBUF_F_RX_LRO";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD: return "RTE_MBUF_F_RX_SEC_OFFLOAD";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED: return "RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID:
+		return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID";
 
 	default: return NULL;
 	}
@@ -791,37 +791,37 @@ int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask rx_flags[] = {
-		{ PKT_RX_VLAN, PKT_RX_VLAN, NULL },
-		{ PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, NULL },
-		{ PKT_RX_FDIR, PKT_RX_FDIR, NULL },
-		{ PKT_RX_L4_CKSUM_BAD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_GOOD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_NONE, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_UNKNOWN, PKT_RX_L4_CKSUM_MASK,
-		  "PKT_RX_L4_CKSUM_UNKNOWN" },
-		{ PKT_RX_IP_CKSUM_BAD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_GOOD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_NONE, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_UNKNOWN, PKT_RX_IP_CKSUM_MASK,
-		  "PKT_RX_IP_CKSUM_UNKNOWN" },
-		{ PKT_RX_OUTER_IP_CKSUM_BAD, PKT_RX_OUTER_IP_CKSUM_BAD, NULL },
-		{ PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN_STRIPPED, NULL },
-		{ PKT_RX_IEEE1588_PTP, PKT_RX_IEEE1588_PTP, NULL },
-		{ PKT_RX_IEEE1588_TMST, PKT_RX_IEEE1588_TMST, NULL },
-		{ PKT_RX_FDIR_ID, PKT_RX_FDIR_ID, NULL },
-		{ PKT_RX_FDIR_FLX, PKT_RX_FDIR_FLX, NULL },
-		{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
-		{ PKT_RX_LRO, PKT_RX_LRO, NULL },
-		{ PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
-		{ PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
-		{ PKT_RX_QINQ, PKT_RX_QINQ, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_BAD, PKT_RX_OUTER_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_GOOD, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_VLAN, RTE_MBUF_F_RX_VLAN, NULL },
+		{ RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, NULL },
+		{ RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_BAD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_NONE, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_IP_CKSUM_BAD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_NONE, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN, RTE_MBUF_F_RX_IP_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, NULL },
+		{ RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_PTP, RTE_MBUF_F_RX_IEEE1588_PTP, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_TMST, RTE_MBUF_F_RX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID, NULL },
+		{ RTE_MBUF_F_RX_FDIR_FLX, RTE_MBUF_F_RX_FDIR_FLX, NULL },
+		{ RTE_MBUF_F_RX_QINQ_STRIPPED, RTE_MBUF_F_RX_QINQ_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_LRO, RTE_MBUF_F_RX_LRO, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD, RTE_MBUF_F_RX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, NULL },
+		{ RTE_MBUF_F_RX_QINQ, RTE_MBUF_F_RX_QINQ, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_INVALID, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_UNKNOWN, PKT_RX_OUTER_L4_CKSUM_MASK,
-		  "PKT_RX_OUTER_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN" },
 	};
 	const char *name;
 	unsigned int i;
@@ -856,32 +856,32 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_TX_VLAN: return "PKT_TX_VLAN";
-	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
-	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
-	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
-	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
-	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
-	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
-	case PKT_TX_IPV4: return "PKT_TX_IPV4";
-	case PKT_TX_IPV6: return "PKT_TX_IPV6";
-	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
-	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
-	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
-	case PKT_TX_TUNNEL_VXLAN: return "PKT_TX_TUNNEL_VXLAN";
-	case PKT_TX_TUNNEL_GTP: return "PKT_TX_TUNNEL_GTP";
-	case PKT_TX_TUNNEL_GRE: return "PKT_TX_TUNNEL_GRE";
-	case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
-	case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
-	case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
-	case PKT_TX_TUNNEL_VXLAN_GPE: return "PKT_TX_TUNNEL_VXLAN_GPE";
-	case PKT_TX_TUNNEL_IP: return "PKT_TX_TUNNEL_IP";
-	case PKT_TX_TUNNEL_UDP: return "PKT_TX_TUNNEL_UDP";
-	case PKT_TX_QINQ: return "PKT_TX_QINQ";
-	case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
-	case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
-	case PKT_TX_UDP_SEG: return "PKT_TX_UDP_SEG";
-	case PKT_TX_OUTER_UDP_CKSUM: return "PKT_TX_OUTER_UDP_CKSUM";
+	case RTE_MBUF_F_TX_VLAN: return "RTE_MBUF_F_TX_VLAN";
+	case RTE_MBUF_F_TX_IP_CKSUM: return "RTE_MBUF_F_TX_IP_CKSUM";
+	case RTE_MBUF_F_TX_TCP_CKSUM: return "RTE_MBUF_F_TX_TCP_CKSUM";
+	case RTE_MBUF_F_TX_SCTP_CKSUM: return "RTE_MBUF_F_TX_SCTP_CKSUM";
+	case RTE_MBUF_F_TX_UDP_CKSUM: return "RTE_MBUF_F_TX_UDP_CKSUM";
+	case RTE_MBUF_F_TX_IEEE1588_TMST: return "RTE_MBUF_F_TX_IEEE1588_TMST";
+	case RTE_MBUF_F_TX_TCP_SEG: return "RTE_MBUF_F_TX_TCP_SEG";
+	case RTE_MBUF_F_TX_IPV4: return "RTE_MBUF_F_TX_IPV4";
+	case RTE_MBUF_F_TX_IPV6: return "RTE_MBUF_F_TX_IPV6";
+	case RTE_MBUF_F_TX_OUTER_IP_CKSUM: return "RTE_MBUF_F_TX_OUTER_IP_CKSUM";
+	case RTE_MBUF_F_TX_OUTER_IPV4: return "RTE_MBUF_F_TX_OUTER_IPV4";
+	case RTE_MBUF_F_TX_OUTER_IPV6: return "RTE_MBUF_F_TX_OUTER_IPV6";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN: return "RTE_MBUF_F_TX_TUNNEL_VXLAN";
+	case RTE_MBUF_F_TX_TUNNEL_GTP: return "RTE_MBUF_F_TX_TUNNEL_GTP";
+	case RTE_MBUF_F_TX_TUNNEL_GRE: return "RTE_MBUF_F_TX_TUNNEL_GRE";
+	case RTE_MBUF_F_TX_TUNNEL_IPIP: return "RTE_MBUF_F_TX_TUNNEL_IPIP";
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE: return "RTE_MBUF_F_TX_TUNNEL_GENEVE";
+	case RTE_MBUF_F_TX_TUNNEL_MPLSINUDP: return "RTE_MBUF_F_TX_TUNNEL_MPLSINUDP";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: return "RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE";
+	case RTE_MBUF_F_TX_TUNNEL_IP: return "RTE_MBUF_F_TX_TUNNEL_IP";
+	case RTE_MBUF_F_TX_TUNNEL_UDP: return "RTE_MBUF_F_TX_TUNNEL_UDP";
+	case RTE_MBUF_F_TX_QINQ: return "RTE_MBUF_F_TX_QINQ";
+	case RTE_MBUF_F_TX_MACSEC: return "RTE_MBUF_F_TX_MACSEC";
+	case RTE_MBUF_F_TX_SEC_OFFLOAD: return "RTE_MBUF_F_TX_SEC_OFFLOAD";
+	case RTE_MBUF_F_TX_UDP_SEG: return "RTE_MBUF_F_TX_UDP_SEG";
+	case RTE_MBUF_F_TX_OUTER_UDP_CKSUM: return "RTE_MBUF_F_TX_OUTER_UDP_CKSUM";
 	default: return NULL;
 	}
 }
@@ -891,33 +891,33 @@ int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask tx_flags[] = {
-		{ PKT_TX_VLAN, PKT_TX_VLAN, NULL },
-		{ PKT_TX_IP_CKSUM, PKT_TX_IP_CKSUM, NULL },
-		{ PKT_TX_TCP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_SCTP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_UDP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_L4_NO_CKSUM, PKT_TX_L4_MASK, "PKT_TX_L4_NO_CKSUM" },
-		{ PKT_TX_IEEE1588_TMST, PKT_TX_IEEE1588_TMST, NULL },
-		{ PKT_TX_TCP_SEG, PKT_TX_TCP_SEG, NULL },
-		{ PKT_TX_IPV4, PKT_TX_IPV4, NULL },
-		{ PKT_TX_IPV6, PKT_TX_IPV6, NULL },
-		{ PKT_TX_OUTER_IP_CKSUM, PKT_TX_OUTER_IP_CKSUM, NULL },
-		{ PKT_TX_OUTER_IPV4, PKT_TX_OUTER_IPV4, NULL },
-		{ PKT_TX_OUTER_IPV6, PKT_TX_OUTER_IPV6, NULL },
-		{ PKT_TX_TUNNEL_VXLAN, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GTP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GRE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IPIP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_VXLAN_GPE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_UDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_QINQ, PKT_TX_QINQ, NULL },
-		{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
-		{ PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
-		{ PKT_TX_UDP_SEG, PKT_TX_UDP_SEG, NULL },
-		{ PKT_TX_OUTER_UDP_CKSUM, PKT_TX_OUTER_UDP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN, NULL },
+		{ RTE_MBUF_F_TX_IP_CKSUM, RTE_MBUF_F_TX_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_SCTP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_L4_NO_CKSUM, RTE_MBUF_F_TX_L4_MASK, "RTE_MBUF_F_TX_L4_NO_CKSUM" },
+		{ RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_TX_TCP_SEG, RTE_MBUF_F_TX_TCP_SEG, NULL },
+		{ RTE_MBUF_F_TX_IPV4, RTE_MBUF_F_TX_IPV4, NULL },
+		{ RTE_MBUF_F_TX_IPV6, RTE_MBUF_F_TX_IPV6, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IP_CKSUM, RTE_MBUF_F_TX_OUTER_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV4, RTE_MBUF_F_TX_OUTER_IPV4, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV6, RTE_MBUF_F_TX_OUTER_IPV6, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GTP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GRE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IPIP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GENEVE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_UDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ, NULL },
+		{ RTE_MBUF_F_TX_MACSEC, RTE_MBUF_F_TX_MACSEC, NULL },
+		{ RTE_MBUF_F_TX_SEC_OFFLOAD, RTE_MBUF_F_TX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_TX_UDP_SEG, RTE_MBUF_F_TX_UDP_SEG, NULL },
+		{ RTE_MBUF_F_TX_OUTER_UDP_CKSUM, RTE_MBUF_F_TX_OUTER_UDP_CKSUM, NULL },
 	};
 	const char *name;
 	unsigned int i;
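
These helpers make debug output follow the new names automatically. A
short sketch of dumping the Rx flags of a received mbuf:

	char buf[256];

	if (rte_get_rx_ol_flag_list(m->ol_flags, buf, sizeof(buf)) == 0)
		printf("rx flags: %s\n", buf); /* e.g. "RTE_MBUF_F_RX_VLAN ..." */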
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index a555f216ae..da158076c3 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -77,7 +77,7 @@ int rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen);
  * @param mask
  *   The mask describing the flag. Usually only one bit must be set.
  *   Several bits can be given if they belong to the same mask.
- *   Ex: PKT_TX_L4_MASK.
+ *   Ex: RTE_MBUF_F_TX_L4_MASK.
  * @return
  *   The name of this flag, or NULL if it's not a valid TX flag.
  */
@@ -874,7 +874,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
 
-	m->ol_flags &= EXT_ATTACHED_MBUF;
+	m->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	m->packet_type = 0;
 	rte_pktmbuf_reset_headroom(m);
 
@@ -1089,7 +1089,7 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
 	m->data_len = 0;
 	m->data_off = 0;
 
-	m->ol_flags |= EXT_ATTACHED_MBUF;
+	m->ol_flags |= RTE_MBUF_F_EXTERNAL;
 	m->shinfo = shinfo;
 }
 
@@ -1163,7 +1163,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 		/* if m is not direct, get the mbuf that embeds the data */
 		rte_mbuf_refcnt_update(rte_mbuf_from_indirect(m), 1);
 		mi->priv_size = m->priv_size;
-		mi->ol_flags = m->ol_flags | IND_ATTACHED_MBUF;
+		mi->ol_flags = m->ol_flags | RTE_MBUF_F_INDIRECT;
 	}
 
 	__rte_pktmbuf_copy_hdr(mi, m);
@@ -1297,7 +1297,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	struct rte_mbuf_ext_shared_info *shinfo;
 
 	/* Clear flags, mbuf is being freed. */
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	shinfo = m->shinfo;
 
 	/* Optimize for performance - do not dec/reinit */
@@ -1828,28 +1828,28 @@ rte_validate_tx_offload(const struct rte_mbuf *m)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* Does packet set any of available offloads? */
-	if (!(ol_flags & PKT_TX_OFFLOAD_MASK))
+	if (!(ol_flags & RTE_MBUF_F_TX_OFFLOAD_MASK))
 		return 0;
 
 	/* IP checksum can be counted only for IPv4 packet */
-	if ((ol_flags & PKT_TX_IP_CKSUM) && (ol_flags & PKT_TX_IPV6))
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) && (ol_flags & RTE_MBUF_F_TX_IPV6))
 		return -EINVAL;
 
 	/* IP type not set when required */
-	if (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
-		if (!(ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)))
+	if (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
+		if (!(ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)))
 			return -EINVAL;
 
 	/* Check requirements for TSO packet */
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		if ((m->tso_segsz == 0) ||
-				((ol_flags & PKT_TX_IPV4) &&
-				!(ol_flags & PKT_TX_IP_CKSUM)))
+				((ol_flags & RTE_MBUF_F_TX_IPV4) &&
+				 !(ol_flags & RTE_MBUF_F_TX_IP_CKSUM)))
 			return -EINVAL;
 
-	/* PKT_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
-	if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) &&
-			!(ol_flags & PKT_TX_OUTER_IPV4))
+	/* RTE_MBUF_F_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
+	if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) &&
+			!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4))
 		return -EINVAL;
 
 	return 0;
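
Taken together, these checks mean a TSO request must carry its
IP-version flag and, for IPv4, RTE_MBUF_F_TX_IP_CKSUM as well. A sketch
of a setup that passes validation (the MSS value is an example):

	m->tso_segsz = 1448;
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM;
	RTE_ASSERT(rte_validate_tx_offload(m) == 0); /* debug builds only */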
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 93db9292c0..fed231bf91 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -43,271 +43,377 @@ extern "C" {
 /**
  * The RX packet is an 802.1q VLAN packet, and the tci has been
  * saved in mbuf->vlan_tci.
- * If the flag PKT_RX_VLAN_STRIPPED is also present, the VLAN
+ * If the flag RTE_MBUF_F_RX_VLAN_STRIPPED is also present, the VLAN
  * header has been stripped from mbuf data, else it is still
  * present.
  */
-#define PKT_RX_VLAN          (1ULL << 0)
+#define RTE_MBUF_F_RX_VLAN          (1ULL << 0)
+#define PKT_RX_VLAN RTE_DEPRECATED(PKT_RX_VLAN) RTE_MBUF_F_RX_VLAN
 
 /** RX packet with RSS hash result. */
-#define PKT_RX_RSS_HASH      (1ULL << 1)
+#define RTE_MBUF_F_RX_RSS_HASH      (1ULL << 1)
+#define PKT_RX_RSS_HASH RTE_DEPRECATED(PKT_RX_RSS_HASH) RTE_MBUF_F_RX_RSS_HASH
 
  /** RX packet with FDIR match indicate. */
-#define PKT_RX_FDIR          (1ULL << 2)
+#define RTE_MBUF_F_RX_FDIR          (1ULL << 2)
+#define PKT_RX_FDIR RTE_DEPRECATED(PKT_RX_FDIR) RTE_MBUF_F_RX_FDIR
 
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
-#define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define PKT_RX_OUTER_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
 
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
  * in the RX configuration of the PMD.
- * When PKT_RX_VLAN_STRIPPED is set, PKT_RX_VLAN must also be set.
+ * When RTE_MBUF_F_RX_VLAN_STRIPPED is set, RTE_MBUF_F_RX_VLAN must also be set.
  */
-#define PKT_RX_VLAN_STRIPPED (1ULL << 6)
+#define RTE_MBUF_F_RX_VLAN_STRIPPED (1ULL << 6)
+#define PKT_RX_VLAN_STRIPPED RTE_DEPRECATED(PKT_RX_VLAN_STRIPPED) \
+		RTE_MBUF_F_RX_VLAN_STRIPPED
 
 /**
  * Mask of bits used to determine the status of RX IP checksum.
- * - PKT_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
- * - PKT_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
- * - PKT_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
- * - PKT_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
+ * - RTE_MBUF_F_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
+ * - RTE_MBUF_F_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
  *   data, but the integrity of the IP header is verified.
  */
-#define PKT_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_MASK RTE_DEPRECATED(PKT_RX_IP_CKSUM_MASK) \
+		RTE_MBUF_F_RX_IP_CKSUM_MASK
 
-#define PKT_RX_IP_CKSUM_UNKNOWN 0
-#define PKT_RX_IP_CKSUM_BAD     (1ULL << 4)
-#define PKT_RX_IP_CKSUM_GOOD    (1ULL << 7)
-#define PKT_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_IP_CKSUM_BAD     (1ULL << 4)
+#define RTE_MBUF_F_RX_IP_CKSUM_GOOD    (1ULL << 7)
+#define RTE_MBUF_F_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_IP_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define PKT_RX_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define PKT_RX_IP_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_IP_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define PKT_RX_IP_CKSUM_NONE RTE_DEPRECATED(PKT_RX_IP_CKSUM_NONE) \
+		RTE_MBUF_F_RX_IP_CKSUM_NONE
 
 /**
  * Mask of bits used to determine the status of RX L4 checksum.
- * - PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
- * - PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
- * - PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
- * - PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
+ * - RTE_MBUF_F_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
+ * - RTE_MBUF_F_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
  *   data, but the integrity of the L4 data is verified.
  */
-#define PKT_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
-
-#define PKT_RX_L4_CKSUM_UNKNOWN 0
-#define PKT_RX_L4_CKSUM_BAD     (1ULL << 3)
-#define PKT_RX_L4_CKSUM_GOOD    (1ULL << 8)
-#define PKT_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define RTE_MBUF_F_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_L4_CKSUM_BAD     (1ULL << 3)
+#define RTE_MBUF_F_RX_L4_CKSUM_GOOD    (1ULL << 8)
+#define RTE_MBUF_F_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_L4_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+#define PKT_RX_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define PKT_RX_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define PKT_RX_L4_CKSUM_NONE RTE_DEPRECATED(PKT_RX_L4_CKSUM_NONE) \
+		RTE_MBUF_F_RX_L4_CKSUM_NONE
 
 /** RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_PTP  (1ULL << 9)
+#define RTE_MBUF_F_RX_IEEE1588_PTP  (1ULL << 9)
+#define PKT_RX_IEEE1588_PTP RTE_DEPRECATED(PKT_RX_IEEE1588_PTP) \
+		RTE_MBUF_F_RX_IEEE1588_PTP
 
 /** RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_IEEE1588_TMST (1ULL << 10)
+#define RTE_MBUF_F_RX_IEEE1588_TMST (1ULL << 10)
+#define PKT_RX_IEEE1588_TMST RTE_DEPRECATED(PKT_RX_IEEE1588_TMST) \
+		RTE_MBUF_F_RX_IEEE1588_TMST
 
 /** FD id reported if FDIR match. */
-#define PKT_RX_FDIR_ID       (1ULL << 13)
+#define RTE_MBUF_F_RX_FDIR_ID       (1ULL << 13)
+#define PKT_RX_FDIR_ID RTE_DEPRECATED(PKT_RX_FDIR_ID) \
+		RTE_MBUF_F_RX_FDIR_ID
 
 /** Flexible bytes reported if FDIR match. */
-#define PKT_RX_FDIR_FLX      (1ULL << 14)
+#define RTE_MBUF_F_RX_FDIR_FLX      (1ULL << 14)
+#define PKT_RX_FDIR_FLX RTE_DEPRECATED(PKT_RX_FDIR_FLX) \
+		RTE_MBUF_F_RX_FDIR_FLX
 
 /**
  * The outer VLAN has been stripped by the hardware and its TCI is
  * saved in mbuf->vlan_tci_outer.
  * This can only happen if VLAN stripping is enabled in the Rx
  * configuration of the PMD.
- * When PKT_RX_QINQ_STRIPPED is set, the flags PKT_RX_VLAN and PKT_RX_QINQ
- * must also be set.
+ * When RTE_MBUF_F_RX_QINQ_STRIPPED is set, the flags RTE_MBUF_F_RX_VLAN
+ * and RTE_MBUF_F_RX_QINQ must also be set.
  *
- * - If both PKT_RX_QINQ_STRIPPED and PKT_RX_VLAN_STRIPPED are set, the 2 VLANs
- *   have been stripped by the hardware and their TCIs are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
- * - If PKT_RX_QINQ_STRIPPED is set and PKT_RX_VLAN_STRIPPED is unset, only the
- *   outer VLAN is removed from packet data, but both tci are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If both RTE_MBUF_F_RX_QINQ_STRIPPED and RTE_MBUF_F_RX_VLAN_STRIPPED are
+ *   set, the 2 VLANs have been stripped by the hardware and their TCIs are
+ *   saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If RTE_MBUF_F_RX_QINQ_STRIPPED is set and RTE_MBUF_F_RX_VLAN_STRIPPED
+ *   is unset, only the outer VLAN is removed from packet data, but both tci
+ *   are saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
  */
-#define PKT_RX_QINQ_STRIPPED (1ULL << 15)
+#define RTE_MBUF_F_RX_QINQ_STRIPPED (1ULL << 15)
+#define PKT_RX_QINQ_STRIPPED RTE_DEPRECATED(PKT_RX_QINQ_STRIPPED) \
+		RTE_MBUF_F_RX_QINQ_STRIPPED
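
The rules above give the Rx side a simple contract; a sketch (process_qinq()
is a hypothetical application hook):

	if (m->ol_flags & RTE_MBUF_F_RX_QINQ_STRIPPED) {
		/* RTE_MBUF_F_RX_VLAN and RTE_MBUF_F_RX_QINQ are also set */
		uint16_t outer_tci = m->vlan_tci_outer;
		uint16_t inner_tci = m->vlan_tci;
		/* the inner tag is still in packet data unless
		 * RTE_MBUF_F_RX_VLAN_STRIPPED is set as well */
		process_qinq(outer_tci, inner_tci);
	}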
 
 /**
  * When packets are coalesced by a hardware or virtual driver, this flag
  * can be set in the RX mbuf, meaning that the m->tso_segsz field is
  * valid and is set to the segment size of original packets.
  */
-#define PKT_RX_LRO           (1ULL << 16)
+#define RTE_MBUF_F_RX_LRO           (1ULL << 16)
+#define PKT_RX_LRO RTE_DEPRECATED(PKT_RX_LRO) RTE_MBUF_F_RX_LRO
 
 /* There is no flag defined at offset 17. It is free for any future use. */
 
 /**
  * Indicate that security offload processing was applied on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD	(1ULL << 18)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD	(1ULL << 18)
+#define PKT_RX_SEC_OFFLOAD RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD
 
 /**
  * Indicate that security offload processing failed on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define PKT_RX_SEC_OFFLOAD_FAILED RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD_FAILED) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
 
 /**
  * The RX packet is a double VLAN, and the outer tci has been
- * saved in mbuf->vlan_tci_outer. If this flag is set, PKT_RX_VLAN
+ * saved in mbuf->vlan_tci_outer. If this flag is set, RTE_MBUF_F_RX_VLAN
  * must also be set and the inner tci is saved in mbuf->vlan_tci.
- * If the flag PKT_RX_QINQ_STRIPPED is also present, both VLANs
+ * If the flag RTE_MBUF_F_RX_QINQ_STRIPPED is also present, both VLAN
  * headers have been stripped from mbuf data, else they are still
  * present.
  */
-#define PKT_RX_QINQ          (1ULL << 20)
+#define RTE_MBUF_F_RX_QINQ          (1ULL << 20)
+#define PKT_RX_QINQ RTE_DEPRECATED(PKT_RX_QINQ) RTE_MBUF_F_RX_QINQ
 
 /**
  * Mask of bits used to determine the status of outer RX L4 checksum.
- * - PKT_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4 checksum
- * - PKT_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet is wrong
- * - PKT_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet is valid
- * - PKT_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4
+ *   checksum
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet
+ *   is wrong
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet
+ *   is valid
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
  *
- * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
- * HW capability, At minimum, the PMD should support
- * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
- */
-#define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
-
-#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN	0
-#define PKT_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
-#define PKT_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
-#define PKT_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
-
-/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-
-#define PKT_FIRST_FREE (1ULL << 23)
-#define PKT_LAST_FREE (1ULL << 40)
-
-/* add new TX flags here, don't forget to update PKT_LAST_FREE  */
+ * The detection of RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD shall be based on the
+ * given HW capability. At minimum, the PMD should support
+ * RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN and RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+ * states if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ */
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN	0
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_UNKNOWN) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+#define PKT_RX_OUTER_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+#define PKT_RX_OUTER_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+#define PKT_RX_OUTER_L4_CKSUM_INVALID \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_INVALID) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+/* add new RX flags here, don't forget to update RTE_MBUF_F_FIRST_FREE */
+
+#define RTE_MBUF_F_FIRST_FREE (1ULL << 23)
+#define PKT_FIRST_FREE RTE_DEPRECATED(PKT_FIRST_FREE) RTE_MBUF_F_FIRST_FREE
+#define RTE_MBUF_F_LAST_FREE (1ULL << 40)
+#define PKT_LAST_FREE RTE_DEPRECATED(PKT_LAST_FREE) RTE_MBUF_F_LAST_FREE
+
+/* add new TX flags here, don't forget to update RTE_MBUF_F_LAST_FREE  */
 
 /**
  * Outer UDP checksum offload flag. This flag is used for enabling
  * outer UDP checksum in PMD. To use outer UDP checksum, the user needs to
  * 1) Enable the following in mbuf,
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
- * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
- * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
+ * b) Set the RTE_MBUF_F_TX_OUTER_UDP_CKSUM flag.
+ * c) Set the RTE_MBUF_F_TX_OUTER_IPV4 or RTE_MBUF_F_TX_OUTER_IPV6 flag.
  * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
-#define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define RTE_MBUF_F_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define PKT_TX_OUTER_UDP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_UDP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM
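
Steps 1a-1c condensed into a sketch for an untagged Ethernet/IPv4 outer
header (step 2 is port-level configuration and is not shown here):

	m->outer_l2_len = sizeof(struct rte_ether_hdr);
	m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_OUTER_UDP_CKSUM | RTE_MBUF_F_TX_OUTER_IPV4;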
 
 /**
  * UDP Fragmentation Offload flag. This flag is used for enabling UDP
 * fragmentation in SW or in HW. When UFO is used, mbuf->tso_segsz is used
  * to store the MSS of UDP fragments.
  */
-#define PKT_TX_UDP_SEG	(1ULL << 42)
+#define RTE_MBUF_F_TX_UDP_SEG	(1ULL << 42)
+#define PKT_TX_UDP_SEG RTE_DEPRECATED(PKT_TX_UDP_SEG) RTE_MBUF_F_TX_UDP_SEG
 
 /**
  * Request security offload processing on the TX packet.
  * To use Tx security offload, the user needs to fill l2_len in mbuf
  * indicating L2 header size and where L3 header starts.
  */
-#define PKT_TX_SEC_OFFLOAD	(1ULL << 43)
+#define RTE_MBUF_F_TX_SEC_OFFLOAD	(1ULL << 43)
+#define PKT_TX_SEC_OFFLOAD RTE_DEPRECATED(PKT_TX_SEC_OFFLOAD) \
+		RTE_MBUF_F_TX_SEC_OFFLOAD
 
 /**
  * Offload the MACsec. This flag must be set by the application to enable
  * this offload feature for a packet to be transmitted.
  */
-#define PKT_TX_MACSEC        (1ULL << 44)
+#define RTE_MBUF_F_TX_MACSEC        (1ULL << 44)
+#define PKT_TX_MACSEC RTE_DEPRECATED(PKT_TX_MACSEC) RTE_MBUF_F_TX_MACSEC
 
 /**
  * Bits 45:48 used for the tunnel type.
  * The tunnel type must be specified for TSO or checksum on the inner part
  * of tunnel packets.
- * These flags can be used with PKT_TX_TCP_SEG for TSO, or PKT_TX_xxx_CKSUM.
+ * These flags can be used with RTE_MBUF_F_TX_TCP_SEG for TSO, or
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * The mbuf fields for inner and outer header lengths are required:
  * outer_l2_len, outer_l3_len, l2_len, l3_len, l4_len and tso_segsz for TSO.
  */
-#define PKT_TX_TUNNEL_VXLAN   (0x1ULL << 45)
-#define PKT_TX_TUNNEL_GRE     (0x2ULL << 45)
-#define PKT_TX_TUNNEL_IPIP    (0x3ULL << 45)
-#define PKT_TX_TUNNEL_GENEVE  (0x4ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN   (0x1ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GRE     (0x2ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IPIP    (0x3ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GENEVE  (0x4ULL << 45)
 /** TX packet with MPLS-in-UDP RFC 7510 header. */
-#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
-#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
-#define PKT_TX_TUNNEL_GTP       (0x7ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GTP       (0x7ULL << 45)
 /**
  * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
- * or PKT_TX_TUNNEL_IPIP if possible.
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_GRE
+ * or RTE_MBUF_F_TX_TUNNEL_IPIP if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_IP (0xDULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IP (0xDULL << 45)
 /**
  * Generic UDP encapsulated tunnel type, used for TSO and checksum offload.
  * UDP tunnel type implies outer IP layer.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_VXLAN
  * if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_UDP (0xEULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_UDP (0xEULL << 45)
 /* add new TX TUNNEL type here */
-#define PKT_TX_TUNNEL_MASK    (0xFULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MASK    (0xFULL << 45)
+
+#define PKT_TX_TUNNEL_VXLAN RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define PKT_TX_TUNNEL_GRE RTE_DEPRECATED(PKT_TX_TUNNEL_GRE) \
+		RTE_MBUF_F_TX_TUNNEL_GRE
+#define PKT_TX_TUNNEL_IPIP RTE_DEPRECATED(PKT_TX_TUNNEL_IPIP) \
+		RTE_MBUF_F_TX_TUNNEL_IPIP
+#define PKT_TX_TUNNEL_GENEVE RTE_DEPRECATED(PKT_TX_TUNNEL_GENEVE) \
+		RTE_MBUF_F_TX_TUNNEL_GENEVE
+#define PKT_TX_TUNNEL_MPLSINUDP RTE_DEPRECATED(PKT_TX_TUNNEL_MPLSINUDP) \
+		RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+#define PKT_TX_TUNNEL_VXLAN_GPE RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN_GPE) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+#define PKT_TX_TUNNEL_GTP RTE_DEPRECATED(PKT_TX_TUNNEL_GTP) \
+		RTE_MBUF_F_TX_TUNNEL_GTP
+#define PKT_TX_TUNNEL_IP RTE_DEPRECATED(PKT_TX_TUNNEL_IP) \
+		RTE_MBUF_F_TX_TUNNEL_IP
+#define PKT_TX_TUNNEL_UDP RTE_DEPRECATED(PKT_TX_TUNNEL_UDP) \
+		RTE_MBUF_F_TX_TUNNEL_UDP
+#define PKT_TX_TUNNEL_MASK RTE_DEPRECATED(PKT_TX_TUNNEL_MASK) \
+		RTE_MBUF_F_TX_TUNNEL_MASK
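
A sketch combining these flags for VXLAN-encapsulated TSO (lengths assume
untagged Ethernet/IPv4 on both layers; 1400 is an assumed MSS):

	m->ol_flags |= RTE_MBUF_F_TX_TUNNEL_VXLAN | RTE_MBUF_F_TX_TCP_SEG |
			RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_IPV4 |
			RTE_MBUF_F_TX_IP_CKSUM;
	m->outer_l2_len = sizeof(struct rte_ether_hdr);
	m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
	/* for UDP tunnels the inner L2 length includes the tunnel headers */
	m->l2_len = sizeof(struct rte_udp_hdr) + sizeof(struct rte_vxlan_hdr) +
			sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1400;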
 
 /**
  * Double VLAN insertion (QinQ) request to driver, driver may offload the
  * insertion based on device capability.
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
-#define PKT_TX_QINQ        (1ULL << 49)
+#define RTE_MBUF_F_TX_QINQ        (1ULL << 49)
+#define PKT_TX_QINQ RTE_DEPRECATED(PKT_TX_QINQ) RTE_MBUF_F_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
  * packet to be transmitted on hardware supporting TSO:
- *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
- *    PKT_TX_TCP_CKSUM)
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag
+ *  - set the RTE_MBUF_F_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ *    RTE_MBUF_F_TX_TCP_CKSUM)
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
+ *  - if it's IPv4, set the RTE_MBUF_F_TX_IP_CKSUM flag
  *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
  */
-#define PKT_TX_TCP_SEG       (1ULL << 50)
+#define RTE_MBUF_F_TX_TCP_SEG       (1ULL << 50)
+#define PKT_TX_TCP_SEG RTE_DEPRECATED(PKT_TX_TCP_SEG) RTE_MBUF_F_TX_TCP_SEG
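
The four steps condensed into a sketch for a plain IPv4/TCP packet (1460 is
an assumed MSS):

	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
			RTE_MBUF_F_TX_IP_CKSUM;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1460;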
 
 /** TX IEEE1588 packet to timestamp. */
-#define PKT_TX_IEEE1588_TMST (1ULL << 51)
+#define RTE_MBUF_F_TX_IEEE1588_TMST (1ULL << 51)
+#define PKT_TX_IEEE1588_TMST RTE_DEPRECATED(PKT_TX_IEEE1588_TMST) \
+		RTE_MBUF_F_TX_IEEE1588_TMST
 
-/**
+/*
  * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
  * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
  * L4 checksum offload, the user needs to:
  *  - fill l2_len and l3_len in mbuf
- *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - set the flags RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_SCTP_CKSUM or
+ *    RTE_MBUF_F_TX_UDP_CKSUM
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
  */
-#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
+
+/** Disable L4 cksum of TX pkt. */
+#define RTE_MBUF_F_TX_L4_NO_CKSUM   (0ULL << 52)
 
 /** TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_TCP_CKSUM     (1ULL << 52)
+#define RTE_MBUF_F_TX_TCP_CKSUM     (1ULL << 52)
 
 /** SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM    (2ULL << 52)
+#define RTE_MBUF_F_TX_SCTP_CKSUM    (2ULL << 52)
 
 /** UDP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM     (3ULL << 52)
+#define RTE_MBUF_F_TX_UDP_CKSUM     (3ULL << 52)
 
 /** Mask for L4 cksum offload request. */
-#define PKT_TX_L4_MASK       (3ULL << 52)
+#define RTE_MBUF_F_TX_L4_MASK       (3ULL << 52)
+
+#define PKT_TX_L4_NO_CKSUM RTE_DEPRECATED(PKT_TX_L4_NO_CKSUM) \
+		RTE_MBUF_F_TX_L4_NO_CKSUM
+#define PKT_TX_TCP_CKSUM RTE_DEPRECATED(PKT_TX_TCP_CKSUM) \
+		RTE_MBUF_F_TX_TCP_CKSUM
+#define PKT_TX_SCTP_CKSUM RTE_DEPRECATED(PKT_TX_SCTP_CKSUM) \
+		RTE_MBUF_F_TX_SCTP_CKSUM
+#define PKT_TX_UDP_CKSUM RTE_DEPRECATED(PKT_TX_UDP_CKSUM) \
+		RTE_MBUF_F_TX_UDP_CKSUM
+#define PKT_TX_L4_MASK RTE_DEPRECATED(PKT_TX_L4_MASK) RTE_MBUF_F_TX_L4_MASK
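
A sketch requesting hardware UDP checksum on an IPv4 packet; as the rte_net.h
hunk further down illustrates, Intel-family NICs additionally expect the
checksum field preset to the pseudo-header checksum:

	struct rte_ipv4_hdr *ip;
	struct rte_udp_hdr *udp;

	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM;
	ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, m->l2_len);
	udp = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
			m->l2_len + m->l3_len);
	udp->dgram_cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);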
 
 /**
- * Offload the IP checksum in the hardware. The flag PKT_TX_IPV4 should
+ * Offload the IP checksum in the hardware. The flag RTE_MBUF_F_TX_IPV4 should
  * also be set by the application, although a PMD will only check
- * PKT_TX_IP_CKSUM.
+ * RTE_MBUF_F_TX_IP_CKSUM.
  *  - fill the mbuf offload information: l2_len, l3_len
  */
-#define PKT_TX_IP_CKSUM      (1ULL << 54)
+#define RTE_MBUF_F_TX_IP_CKSUM      (1ULL << 54)
+#define PKT_TX_IP_CKSUM RTE_DEPRECATED(PKT_TX_IP_CKSUM) RTE_MBUF_F_TX_IP_CKSUM
 
 /**
  * Packet is IPv4. This flag must be set when using any offload feature
@@ -315,7 +421,8 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV4          (1ULL << 55)
+#define RTE_MBUF_F_TX_IPV4          (1ULL << 55)
+#define PKT_TX_IPV4 RTE_DEPRECATED(PKT_TX_IPV4) RTE_MBUF_F_TX_IPV4
 
 /**
  * Packet is IPv6. This flag must be set when using an offload feature
@@ -323,65 +430,76 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV6          (1ULL << 56)
+#define RTE_MBUF_F_TX_IPV6          (1ULL << 56)
+#define PKT_TX_IPV6 RTE_DEPRECATED(PKT_TX_IPV6) RTE_MBUF_F_TX_IPV6
 
 /**
  * VLAN tag insertion request to driver, driver may offload the insertion
  * based on the device capability.
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
-#define PKT_TX_VLAN          (1ULL << 57)
+#define RTE_MBUF_F_TX_VLAN          (1ULL << 57)
+#define PKT_TX_VLAN RTE_DEPRECATED(PKT_TX_VLAN) RTE_MBUF_F_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
- * flag PKT_TX_OUTER_IPV4 should also be set by the application, although
- * a PMD will only check PKT_TX_OUTER_IP_CKSUM.
+ * flag RTE_MBUF_F_TX_OUTER_IPV4 should also be set by the application, although
+ * a PMD will only check RTE_MBUF_F_TX_OUTER_IP_CKSUM.
  *  - fill the mbuf offload information: outer_l2_len, outer_l3_len
  */
-#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define RTE_MBUF_F_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define PKT_TX_OUTER_IP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_IP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM
 
 /**
  * Packet outer header is IPv4. This flag must be set when using any
  * outer offload feature (L3 or L4 checksum) to tell the NIC that the
  * outer header of the tunneled packet is an IPv4 packet.
  */
-#define PKT_TX_OUTER_IPV4   (1ULL << 59)
+#define RTE_MBUF_F_TX_OUTER_IPV4   (1ULL << 59)
+#define PKT_TX_OUTER_IPV4 RTE_DEPRECATED(PKT_TX_OUTER_IPV4) \
+		RTE_MBUF_F_TX_OUTER_IPV4
 
 /**
  * Packet outer header is IPv6. This flag must be set when using any
  * outer offload feature (L4 checksum) to tell the NIC that the outer
  * header of the tunneled packet is an IPv6 packet.
  */
-#define PKT_TX_OUTER_IPV6    (1ULL << 60)
+#define RTE_MBUF_F_TX_OUTER_IPV6    (1ULL << 60)
+#define PKT_TX_OUTER_IPV6 RTE_DEPRECATED(PKT_TX_OUTER_IPV6) \
+		RTE_MBUF_F_TX_OUTER_IPV6
 
 /**
  * Bitmask of all supported packet Tx offload features flags,
  * which can be set for packet.
  */
-#define PKT_TX_OFFLOAD_MASK (    \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_IEEE1588_TMST |	 \
-		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ |        \
-		PKT_TX_TUNNEL_MASK |	 \
-		PKT_TX_MACSEC |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
-		PKT_TX_UDP_SEG |	 \
-		PKT_TX_OUTER_UDP_CKSUM)
+#define RTE_MBUF_F_TX_OFFLOAD_MASK (    \
+		RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |  \
+		RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_IEEE1588_TMST |	 \
+		RTE_MBUF_F_TX_TCP_SEG |         \
+		RTE_MBUF_F_TX_QINQ |        \
+		RTE_MBUF_F_TX_TUNNEL_MASK |	 \
+		RTE_MBUF_F_TX_MACSEC |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
+		RTE_MBUF_F_TX_UDP_SEG |	 \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM)
+#define PKT_TX_OFFLOAD_MASK RTE_DEPRECATED(PKT_TX_OFFLOAD_MASK) RTE_MBUF_F_TX_OFFLOAD_MASK
 
 /**
  * Mbuf having an external buffer attached. shinfo in mbuf must be filled.
  */
-#define EXT_ATTACHED_MBUF    (1ULL << 61)
+#define RTE_MBUF_F_EXTERNAL    (1ULL << 61)
+#define EXT_ATTACHED_MBUF RTE_DEPRECATED(EXT_ATTACHED_MBUF) RTE_MBUF_F_EXTERNAL
 
-#define IND_ATTACHED_MBUF    (1ULL << 62) /**< Indirect attached mbuf */
+#define RTE_MBUF_F_INDIRECT    (1ULL << 62) /**< Indirect attached mbuf */
+#define IND_ATTACHED_MBUF RTE_DEPRECATED(IND_ATTACHED_MBUF) RTE_MBUF_F_INDIRECT
 
 /** Alignment constraint of mbuf private area. */
 #define RTE_MBUF_PRIV_ALIGN 8
@@ -528,7 +646,7 @@ struct rte_mbuf {
 
 	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if PKT_RX_VLAN is set. */
+	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
 	uint16_t vlan_tci;
 
 	RTE_STD_C11
@@ -546,7 +664,7 @@ struct rte_mbuf {
 				};
 				uint32_t hi;
 				/**< First 4 flexible bytes or FD ID, dependent
-				 * on PKT_RX_FDIR_* flag in ol_flags.
+				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct rte_mbuf_sched sched;
@@ -565,7 +683,7 @@ struct rte_mbuf {
 		} hash;                   /**< hash information */
 	};
 
-	/** Outer VLAN TCI (CPU order), valid if PKT_RX_QINQ is set. */
+	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
 	uint16_t vlan_tci_outer;
 
 	uint16_t buf_len;         /**< Length of segment buffer. */
@@ -655,14 +773,14 @@ struct rte_mbuf_ext_shared_info {
  * If a mbuf has its data in another mbuf and references it by mbuf
  * indirection, this mbuf can be defined as a cloned mbuf.
  */
-#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & IND_ATTACHED_MBUF)
+#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & RTE_MBUF_F_INDIRECT)
 
 /**
  * Returns TRUE if given mbuf has an external buffer, or FALSE otherwise.
  *
  * External buffer is a user-provided anonymous buffer.
  */
-#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & EXT_ATTACHED_MBUF)
+#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & RTE_MBUF_F_EXTERNAL)
 
 /**
  * Returns TRUE if given mbuf is direct, or FALSE otherwise.
@@ -671,7 +789,7 @@ struct rte_mbuf_ext_shared_info {
  * can be defined as a direct mbuf.
  */
 #define RTE_MBUF_DIRECT(mb) \
-	(!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
+	(!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
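
Together, these three macros classify an mbuf by where its data lives;
a sketch:

	if (RTE_MBUF_DIRECT(m))
		; /* data is embedded in the mbuf itself */
	else if (RTE_MBUF_HAS_EXTBUF(m))
		; /* user-provided external buffer, shinfo must be valid */
	else /* RTE_MBUF_CLONED(m) */
		; /* indirect: the data belongs to another mbuf */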
 
 /** Uninitialized or unspecified port. */
 #define RTE_MBUF_PORT_INVALID UINT16_MAX
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index ca46eb279e..0f28d51651 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -130,7 +130,7 @@ init_shared_mem(void)
 		mark_free(dynfield1);
 
 		/* init free_flags */
-		for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+		for (mask = RTE_MBUF_F_FIRST_FREE; mask <= RTE_MBUF_F_LAST_FREE; mask <<= 1)
 			shm->free_flags |= mask;
 
 		process_score();
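
For context, the bits between RTE_MBUF_F_FIRST_FREE and RTE_MBUF_F_LAST_FREE
back the dynamic flag allocator; a sketch of an application reserving one of
them (the flag name is hypothetical):

	static const struct rte_mbuf_dynflag desc = { .name = "my_app_flag" };
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit >= 0)
		m->ol_flags |= (1ULL << bit); /* per-packet application mark */
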
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index d0eeb6f996..43d782d986 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -350,7 +350,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
 		return -1;
 
 	vh = (struct rte_vlan_hdr *)(eh + 1);
-	m->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	m->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
 
 	/* Copy ether header over rather than moving whole packet */
@@ -397,9 +397,9 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
 	vh = (struct rte_vlan_hdr *) (nh + 1);
 	vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
 
-	(*m)->ol_flags &= ~(PKT_RX_VLAN_STRIPPED | PKT_TX_VLAN);
+	(*m)->ol_flags &= ~(RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_TX_VLAN);
 
-	if ((*m)->ol_flags & PKT_TX_TUNNEL_MASK)
+	if ((*m)->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		(*m)->outer_l2_len += sizeof(struct rte_vlan_hdr);
 	else
 		(*m)->l2_len += sizeof(struct rte_vlan_hdr);
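
A usage sketch of the two helpers with the renamed flags (100 is a
hypothetical TCI):

	if (rte_vlan_strip(m) == 0) {
		/* RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED are now
		 * set and the TCI is available in m->vlan_tci */
	}

	m->vlan_tci = 100;
	if (rte_vlan_insert(&m) != 0)
		rte_pktmbuf_free(m); /* e.g. shared mbuf that cannot be modified */
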
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 05948b69b7..38e5c9ae8a 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -333,7 +333,7 @@ rte_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		l3_len = rte_be_to_cpu_16(ipv4_hdr->total_length);
@@ -474,7 +474,7 @@ rte_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (uint32_t)(ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len = ipv6_hdr->payload_len;
diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
index 42639bc154..917b7748bc 100644
--- a/lib/net/rte_net.h
+++ b/lib/net/rte_net.h
@@ -125,17 +125,17 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 	 * Mainly it is required to avoid fragmented headers check if
 	 * no offloads are requested.
 	 */
-	if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
-			  PKT_TX_OUTER_IP_CKSUM)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG |
+			  RTE_MBUF_F_TX_OUTER_IP_CKSUM)))
 		return 0;
 
-	if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
+	if (ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)) {
 		inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
 		/*
 		 * prepare outer IPv4 header checksum by setting it to 0,
 		 * in order to be computed by hardware NICs.
 		 */
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 					struct rte_ipv4_hdr *, m->outer_l2_len);
 			ipv4_hdr->hdr_checksum = 0;
@@ -151,16 +151,16 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 		     inner_l3_offset + m->l3_len + m->l4_len))
 		return -ENOTSUP;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 				inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) {
-		if (ol_flags & PKT_TX_IPV4) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
 			udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
@@ -175,9 +175,9 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 			udp_hdr->dgram_cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
 					ol_flags);
 		}
-	} else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM ||
-			(ol_flags & PKT_TX_TCP_SEG)) {
-		if (ol_flags & PKT_TX_IPV4) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM ||
+			(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			/* non-TSO tcp or TSO */
 			tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index ad7904c0ee..a4814b8198 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2085,7 +2085,7 @@ pkt_work_tag(struct rte_mbuf *mbuf,
 	struct tag_data *data)
 {
 	mbuf->hash.fdir.hi = data->tag;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static __rte_always_inline void
@@ -2103,10 +2103,10 @@ pkt4_work_tag(struct rte_mbuf *mbuf0,
 	mbuf2->hash.fdir.hi = data2->tag;
 	mbuf3->hash.fdir.hi = data3->tag;
 
-	mbuf0->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf1->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf2->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf3->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf0->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf1->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf2->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf3->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 /**
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index ec2c91e7a7..56bf98d9b1 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -411,25 +411,25 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 static __rte_always_inline void
 virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 {
-	uint64_t csum_l4 = m_buf->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = m_buf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	if (csum_l4) {
 		net_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		net_hdr->csum_start = m_buf->l2_len + m_buf->l3_len;
 
 		switch (csum_l4) {
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_tcp_hdr,
 						cksum));
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_udp_hdr,
 						dgram_cksum));
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_sctp_hdr,
 						cksum));
 			break;
@@ -441,7 +441,7 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 
 	/* IP cksum verification cannot be bypassed, then calculate here */
-	if (m_buf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr;
 
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m_buf, struct rte_ipv4_hdr *,
@@ -450,15 +450,15 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	}
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG) {
-		if (m_buf->ol_flags & PKT_TX_IPV4)
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		if (m_buf->ol_flags & RTE_MBUF_F_TX_IPV4)
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len
 					+ m_buf->l4_len;
-	} else if (m_buf->ol_flags & PKT_TX_UDP_SEG) {
+	} else if (m_buf->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
 		net_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len +
@@ -2259,7 +2259,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		m->l3_len = rte_ipv4_hdr_len(ipv4_hdr);
 		if (data_len < m->l2_len + m->l3_len)
 			goto error;
-		m->ol_flags |= PKT_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
 		*l4_proto = ipv4_hdr->next_proto_id;
 		break;
 	case RTE_ETHER_TYPE_IPV6:
@@ -2268,7 +2268,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 				m->l2_len);
 		m->l3_len = sizeof(struct rte_ipv6_hdr);
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		*l4_proto = ipv6_hdr->proto;
 		break;
 	default:
@@ -2323,17 +2323,17 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			case (offsetof(struct rte_tcp_hdr, cksum)):
 				if (l4_proto != IPPROTO_TCP)
 					goto error;
-				m->ol_flags |= PKT_TX_TCP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 				break;
 			case (offsetof(struct rte_udp_hdr, dgram_cksum)):
 				if (l4_proto != IPPROTO_UDP)
 					goto error;
-				m->ol_flags |= PKT_TX_UDP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 				break;
 			case (offsetof(struct rte_sctp_hdr, cksum)):
 				if (l4_proto != IPPROTO_SCTP)
 					goto error;
-				m->ol_flags |= PKT_TX_SCTP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 				break;
 			default:
 				goto error;
@@ -2355,14 +2355,14 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			tcp_len = (tcp_hdr->data_off & 0xf0) >> 2;
 			if (data_len < m->l2_len + m->l3_len + tcp_len)
 				goto error;
-			m->ol_flags |= PKT_TX_TCP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = tcp_len;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if (l4_proto != IPPROTO_UDP)
 				goto error;
-			m->ol_flags |= PKT_TX_UDP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = sizeof(struct rte_udp_hdr);
 			break;
@@ -2396,7 +2396,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		return;
 	}
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -2423,7 +2423,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported != 0) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -2453,13 +2453,13 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		case VIRTIO_NET_HDR_GSO_TCPV6:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_TCP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		default:
-- 
2.30.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags Olivier Matz
@ 2021-10-04  8:29   ` David Marchand
  2021-10-04  9:46     ` Olivier Matz
  0 siblings, 1 reply; 20+ messages in thread
From: David Marchand @ 2021-10-04  8:29 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Andrew Rybchenko, Yigit, Ferruh, Thomas Monjalon

On Wed, Sep 29, 2021 at 11:50 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT, PKT_RX_EIP_CKSUM_BAD are
> marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> VLAN flags") (2017). Remove their definitions from rte_mbuf_core.h,
> and replace their usages.

The patch lgtm except for the removal of some "bad checksum" flags; see below.

[snip]


> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 05fc2fdee7..549e9416c4 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -159,11 +159,6 @@ Deprecation Notices
>    will be limited to maximum 256 queues.
>    Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>
> -* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
> -  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
> -  consistent with existing outer header checksum status flag naming, which
> -  should help in reducing confusion about its usage.
> -
>  * i40e: As there are both i40evf and iavf pmd, the functions of them are
>    duplicated. And now more and more advanced features are developed on iavf.
>    To keep consistent with kernel driver's name

Those 3 flags are easy to replace, but some projects are still using them.

$ git grep-all -El '\<(PKT_TX_VLAN_PKT|PKT_TX_QINQ_PKT|PKT_RX_EIP_CKSUM_BAD)\>' \
	| grep -v \\.patch$
DPVS/src/netif.c
DPVS/src/vlan.c
FD.io-VPP/src/plugins/dpdk/device/format.c
gatekeeper/bpf/bpf_mbuf.h
lagopus/src/dataplane/dpdk/worker.c
packet-journey/app/main.c
Trex/src/pal/common/common_mbuf.h
Trex/src/pal/linux/mbuf.h

Please update the release notes to announce this API update.


[snip]

> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index 9d8e3ddc86..93db9292c0 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -55,37 +55,12 @@ extern "C" {
>   /** RX packet with FDIR match indicate. */
>  #define PKT_RX_FDIR          (1ULL << 2)
>
> -/**
> - * Deprecated.
> - * Checking this flag alone is deprecated: check the 2 bits of
> - * PKT_RX_L4_CKSUM_MASK.
> - * This flag was set when the L4 checksum of a packet was detected as
> - * wrong by the hardware.
> - */
> -#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
> -
> -/**
> - * Deprecated.
> - * Checking this flag alone is deprecated: check the 2 bits of
> - * PKT_RX_IP_CKSUM_MASK.
> - * This flag was set when the IP checksum of a packet was detected as
> - * wrong by the hardware.
> - */
> -#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
> -

You did not mention PKT_RX_IP_CKSUM_BAD and PKT_RX_L4_CKSUM_BAD in the
commit log.
There was no deprecation notice, and those flags were not marked
RTE_DEPRECATED (there are still many projects referencing them).

Is this removal intended?


-- 
David Marchand


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags
  2021-10-04  8:29   ` David Marchand
@ 2021-10-04  9:46     ` Olivier Matz
  0 siblings, 0 replies; 20+ messages in thread
From: Olivier Matz @ 2021-10-04  9:46 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Andrew Rybchenko, Yigit, Ferruh, Thomas Monjalon

Hi David,

Thank you for the review, my comments below.

On Mon, Oct 04, 2021 at 10:29:36AM +0200, David Marchand wrote:
> On Wed, Sep 29, 2021 at 11:50 PM Olivier Matz <olivier.matz@6wind.com> wrote:
> >
> > The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT, PKT_RX_EIP_CKSUM_BAD are
> > marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> > VLAN flags") (2017). Remove their definitions from rte_mbuf_core.h,
> > and replace their usages.
> 
> The patch lgtm except the removal of some "bad checksum" flags, see below.
>
> [snip]
> 
> 
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 05fc2fdee7..549e9416c4 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -159,11 +159,6 @@ Deprecation Notices
> >    will be limited to maximum 256 queues.
> >    Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
> >
> > -* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
> > -  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
> > -  consistent with existing outer header checksum status flag naming, which
> > -  should help in reducing confusion about its usage.
> > -
> >  * i40e: As there are both i40evf and iavf pmd, the functions of them are
> >    duplicated. And now more and more advanced features are developed on iavf.
> >    To keep consistent with kernel driver's name
> 
> Those 3 flags are easy to replace, but some projects are still using them.
> 
> $ git grep-all -El
> '\<(PKT_TX_VLAN_PKT|PKT_TX_QINQ_PKT|PKT_RX_EIP_CKSUM_BAD)\>' |grep -v
> \\.patch$
> DPVS/src/netif.c
> DPVS/src/vlan.c
> FD.io-VPP/src/plugins/dpdk/device/format.c
> gatekeeper/bpf/bpf_mbuf.h
> lagopus/src/dataplane/dpdk/worker.c
> packet-journey/app/main.c
> Trex/src/pal/common/common_mbuf.h
> Trex/src/pal/linux/mbuf.h
> 
> Please update the release notes to announce this API update.

I will add an additional note in the release notes.

FYI, the flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT were not marked
RTE_DEPRECATED because their deprecation predates this macro. If needed, I
can keep them for one more version in the header file.

> [snip]
> 
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> > index 9d8e3ddc86..93db9292c0 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -55,37 +55,12 @@ extern "C" {
> >   /** RX packet with FDIR match indicate. */
> >  #define PKT_RX_FDIR          (1ULL << 2)
> >
> > -/**
> > - * Deprecated.
> > - * Checking this flag alone is deprecated: check the 2 bits of
> > - * PKT_RX_L4_CKSUM_MASK.
> > - * This flag was set when the L4 checksum of a packet was detected as
> > - * wrong by the hardware.
> > - */
> > -#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
> > -
> > -/**
> > - * Deprecated.
> > - * Checking this flag alone is deprecated: check the 2 bits of
> > - * PKT_RX_IP_CKSUM_MASK.
> > - * This flag was set when the IP checksum of a packet was detected as
> > - * wrong by the hardware.
> > - */
> > -#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
> > -
> 
> You did not mention PKT_RX_IP_CKSUM_BAD and PKT_RX_L4_CKSUM_BAD in the
> commitlog.
> There was no deprecation notice, and those flags were not marked
> RTE_DEPRECATED (there are still many projects referencing them).
> 
> Is this removal intended?

Yes, actually these flags were defined twice in different places. I'm just
removing one occurrence, and the other remains.

Thanks,
Olivier

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace
  2021-09-29 21:48 [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
                   ` (2 preceding siblings ...)
  2021-09-29 21:48 ` [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to " Olivier Matz
@ 2021-10-15 19:24 ` Olivier Matz
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags Olivier Matz
                     ` (4 more replies)
  3 siblings, 5 replies; 20+ messages in thread
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

This patchset fixes the mbuf offload flags namespace by adding an RTE_
prefix to the definitions. The old flags remain usable, but a deprecation
warning is issued at compilation.

v2
* rebase
* remove part about PKT_RX_EIP_CKSUM_BAD, already in 74a74bf98c
* mark old vlan flags as deprecated instead of removing them
* clarify that rx checksum flags are not really removed (just remove dup)


Olivier Matz (4):
  mbuf: remove duplicate definition of cksum offload flags
  mbuf: mark old VLAN offload flags as deprecated
  cocci: add cocci script to rename mbuf offload flags
  mbuf: add rte prefix to offload flags

 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_cryptodev_security_ipsec.c      |   4 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 +++---
 .../cocci/prefix_mbuf_offload_flags.cocci     | 441 ++++++++++++++++++
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |   3 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  12 +-
 drivers/crypto/cnxk/cn10k_ipsec_la_ops.h      |   4 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/cnxk/cn9k_worker.h              |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +-
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  38 +-
 drivers/net/cnxk/cn10k_tx.h                   | 178 +++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  32 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 +++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 +++----
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 117 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 +--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 104 ++---
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 +++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 ++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 422 ++++++++++-------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/security/rte_security.h                   |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 175 files changed, 3561 insertions(+), 3031 deletions(-)
 create mode 100644 devtools/cocci/prefix_mbuf_offload_flags.cocci

-- 
2.30.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
@ 2021-10-15 19:24   ` Olivier Matz
  2021-10-16  7:47     ` Andrew Rybchenko
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated Olivier Matz
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

The flags PKT_RX_L4_CKSUM_BAD and PKT_RX_IP_CKSUM_BAD are defined
twice with the same value. Remove the occurrence that was
marked as "deprecated".

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/mbuf/rte_mbuf_core.h | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index d6f1679944..e1e0f4b51b 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -55,24 +55,6 @@ extern "C" {
  /** RX packet with FDIR match indicate. */
 #define PKT_RX_FDIR          (1ULL << 2)
 
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_L4_CKSUM_MASK.
- * This flag was set when the L4 checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
-
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_IP_CKSUM_MASK.
- * This flag was set when the IP checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
-
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags Olivier Matz
@ 2021-10-15 19:24   ` Olivier Matz
  2021-10-16  7:50     ` Andrew Rybchenko
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 3/4] cocci: add cocci script to rename mbuf offload flags Olivier Matz
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT have been
marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
VLAN flags") (2017), but they were not using the RTE_DEPRECATED
macro because it did not exist at that time. Add it, and replace
usages of these flags.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/flowgen.c                    |  4 ++--
 app/test-pmd/macfwd.c                     |  4 ++--
 app/test-pmd/txonly.c                     |  4 ++--
 drivers/net/af_packet/rte_eth_af_packet.c |  2 +-
 drivers/net/avp/avp_ethdev.c              |  4 ++--
 drivers/net/axgbe/axgbe_rxtx.c            |  2 +-
 drivers/net/bnx2x/bnx2x.c                 |  2 +-
 drivers/net/bnxt/bnxt_txr.c               |  8 ++++----
 drivers/net/cxgbe/sge.c                   |  4 ++--
 drivers/net/dpaa2/dpaa2_rxtx.c            |  6 +++---
 drivers/net/e1000/em_rxtx.c               |  4 ++--
 drivers/net/e1000/igb_rxtx.c              |  6 +++---
 drivers/net/fm10k/fm10k_rxtx.c            |  4 ++--
 drivers/net/hinic/hinic_pmd_tx.c          |  2 +-
 drivers/net/hns3/hns3_rxtx.c              | 14 ++++++-------
 drivers/net/i40e/i40e_rxtx.c              | 10 +++++-----
 drivers/net/iavf/iavf_rxtx.c              |  6 +++---
 drivers/net/iavf/iavf_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c                |  6 +++---
 drivers/net/ionic/ionic_rxtx.c            |  4 ++--
 drivers/net/ixgbe/ixgbe_rxtx.c            |  6 +++---
 drivers/net/mlx5/mlx5_tx.h                | 24 +++++++++++------------
 drivers/net/netvsc/hn_rxtx.c              |  2 +-
 drivers/net/nfp/nfp_rxtx.c                |  2 +-
 drivers/net/qede/qede_rxtx.c              |  2 +-
 drivers/net/qede/qede_rxtx.h              |  2 +-
 drivers/net/sfc/sfc_ef100_tx.c            |  4 ++--
 drivers/net/sfc/sfc_ef10_tx.c             |  2 +-
 drivers/net/sfc/sfc_tx.c                  |  2 +-
 drivers/net/txgbe/txgbe_rxtx.c            |  8 ++++----
 drivers/net/vhost/rte_eth_vhost.c         |  2 +-
 drivers/net/virtio/virtio_rxtx.c          |  2 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c        |  4 ++--
 examples/vhost/main.c                     |  2 +-
 lib/mbuf/rte_mbuf_core.h                  |  8 ++++----
 35 files changed, 85 insertions(+), 85 deletions(-)

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index a96169e680..0ce2afbea5 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,9 +100,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN_PKT;
+		ol_flags |= PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ee76df7f03..a67907b449 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,9 +73,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 40655801cc..0e44bc4d3b 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -353,9 +353,9 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index a077376dc0..931fc230e5 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -229,7 +229,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		if (mbuf->ol_flags & PKT_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6cb8bb4338..6f0dafc287 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c8618d2d6d..c9d5800b01 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -811,7 +811,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 083deff1b1..819e54044b 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m0->ol_flags & PKT_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 9e45ddd7a8..e36da59fce 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -113,10 +113,10 @@ bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
+				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) ||
+				PKT_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -203,13 +203,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ_PKT) {
+		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN_PKT) {
+		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4..3299d6252e 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..f491f4d10a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1228,7 +1228,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT) ||
+						& PKT_TX_VLAN) ||
 						(eth_data->dev_conf.txmode.offloads
 						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
@@ -1271,7 +1271,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
+			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
 				(eth_data->dev_conf.txmode.offloads
 				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT)) {
+						& PKT_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 8542a15320..82873c91b0 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -55,7 +55,7 @@
 		PKT_TX_IPV4 |           \
 		PKT_TX_IP_CKSUM |       \
 		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN_PKT)
+		PKT_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
 		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
@@ -508,7 +508,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN_PKT) {
+		if (ol_flags & PKT_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 8d64d7397a..fa2797074f 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -54,7 +54,7 @@
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -262,7 +262,7 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
@@ -371,7 +371,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..496e72a003 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -38,7 +38,7 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 #endif
 
 #define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |            \
 		PKT_TX_IPV4 |            \
 		PKT_TX_IP_CKSUM |        \
@@ -609,7 +609,7 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN_PKT)
+	if (mb->ol_flags & PKT_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index 669f82389c..e14937139d 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN_PKT)) {
+	if (unlikely(ol_flags & PKT_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6b77672aa1..feac7eb218 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -3202,11 +3202,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN_PKT |
-				PKT_TX_QINQ_PKT)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
+				PKT_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ_PKT)
+		if (ol_flags & PKT_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3214,8 +3214,8 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ_PKT ||
-	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & PKT_TX_QINQ ||
+	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
@@ -3754,12 +3754,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ_PKT)
+	if (m->ol_flags & PKT_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN_PKT)
+		if (m->ol_flags & PKT_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 807e1a4133..c858354b73 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -64,8 +64,8 @@
 		PKT_TX_L4_MASK |        \
 		PKT_TX_OUTER_IP_CKSUM | \
 		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ_PKT |       \
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_QINQ |       \
+		PKT_TX_VLAN |	\
 		PKT_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
@@ -1006,7 +1006,7 @@ i40e_calc_context_desc(uint64_t flags)
 {
 	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
 		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ_PKT |
+		PKT_TX_QINQ |
 		PKT_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1151,7 +1151,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1200,7 +1200,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ_PKT) {
+			if (ol_flags & PKT_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 88661e5d74..045fd92368 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2074,7 +2074,7 @@ iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
 	if (flags & PKT_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN_PKT &&
+	if (flags & PKT_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN_PKT &&
+		if (ol_flags & PKT_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2301,7 +2301,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN_PKT &&
+			if (ol_flags & PKT_TX_VLAN &&
 			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index c7a868cf1d..229a2ea4dd 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -62,7 +62,7 @@
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 7dee1bb0fa..97656b39fd 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -78,7 +78,7 @@
 		PKT_TX_OUTER_IPV4 |	\
 		PKT_TX_IPV6 |		\
 		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_VLAN |	\
 		PKT_TX_IP_CKSUM |	\
 		PKT_TX_L4_MASK |	\
 		PKT_TX_TCP_SEG |	\
@@ -1530,7 +1530,7 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
@@ -1604,7 +1604,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 67631a5813..001a368856 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -355,7 +355,7 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN_PKT);
+	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
@@ -494,7 +494,7 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN_PKT);
+	has_vlan = (ol_flags & PKT_TX_VLAN);
 	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
 			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
 			((ol_flags & PKT_TX_OUTER_IPV4) ||
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 349180e7c1..0e3aec9906 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -64,7 +64,7 @@
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -384,7 +384,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
@@ -543,7 +543,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index e722738682..4f83291cc2 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -538,7 +538,7 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
@@ -957,7 +957,7 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 		       0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1818,7 +1818,7 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	    loc->mbuf->ol_flags & PKT_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2032,7 +2032,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2295,7 +2295,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2420,7 +2420,7 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) &&
+	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2482,7 +2482,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2943,7 +2943,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3008,7 +3008,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN_PKT));
+					    PKT_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3153,7 +3153,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3384,7 +3384,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+			    loc->mbuf->ol_flags & PKT_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index e880dc2bb2..acae68e082 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1331,7 +1331,7 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index feeacb5614..0df300fe0d 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -935,7 +935,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN_PKT) &&
+		if ((pkt->ol_flags & PKT_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba..050c6f5c32 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2587,7 +2587,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN_PKT) {
+		if (tx_ol_flags & PKT_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c8..025ed6fff2 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -153,7 +153,7 @@
 				   PKT_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN_PKT		| \
+			      PKT_TX_VLAN		| \
 			      PKT_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d34..53d01612d1 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca..277fe6c6ca 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN_PKT))) {
+			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d2..936ae815ea 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN_PKT) ?
+	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index b6339fe50b..f8c1ad3937 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -54,7 +54,7 @@ static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
 		PKT_TX_OUTER_IPV4 |
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
-		PKT_TX_VLAN_PKT |
+		PKT_TX_VLAN |
 		PKT_TX_L4_MASK |
 		PKT_TX_TCP_SEG |
 		PKT_TX_TUNNEL_MASK |
@@ -408,7 +408,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
@@ -496,7 +496,7 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,7 +507,7 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2e24e5f7ff..2be5edea86 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e8e6ed20a5..63f70fc13d 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -1744,7 +1744,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) {
+		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b01c4c01f9..0c9f881d8a 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -49,7 +49,7 @@
 #include "vmxnet3_ethdev.h"
 
 #define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN_PKT | \
+		PKT_TX_VLAN | \
 		PKT_TX_IPV6 |     \
 		PKT_TX_IPV4 |     \
 		PKT_TX_L4_MASK |  \
@@ -520,7 +520,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+		if (txm->ol_flags & PKT_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index b24fd82a6e..d2254733bc 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1116,7 +1116,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN_PKT;
+		m->ol_flags |= PKT_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index e1e0f4b51b..8e7eef319b 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -265,7 +265,7 @@ extern "C" {
  */
 #define PKT_TX_QINQ        (1ULL << 49)
 /** This old name is deprecated. */
-#define PKT_TX_QINQ_PKT    PKT_TX_QINQ
+#define PKT_TX_QINQ_PKT RTE_DEPRECATED(PKT_TX_QINQ_PKT) PKT_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
@@ -334,7 +334,7 @@ extern "C" {
  */
 #define PKT_TX_VLAN          (1ULL << 57)
 /* this old name is deprecated */
-#define PKT_TX_VLAN_PKT      PKT_TX_VLAN
+#define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) PKT_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
@@ -366,14 +366,14 @@ extern "C" {
 		PKT_TX_OUTER_IPV6 |	 \
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
 		PKT_TX_IP_CKSUM |        \
 		PKT_TX_L4_MASK |         \
 		PKT_TX_IEEE1588_TMST |	 \
 		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ_PKT |        \
+		PKT_TX_QINQ |        \
 		PKT_TX_TUNNEL_MASK |	 \
 		PKT_TX_MACSEC |		 \
 		PKT_TX_SEC_OFFLOAD |	 \
-- 
2.30.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 3/4] cocci: add cocci script to rename mbuf offload flags
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags Olivier Matz
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated Olivier Matz
@ 2021-10-15 19:24   ` Olivier Matz
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to " Olivier Matz
  2021-10-24 11:43   ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace David Marchand
  4 siblings, 0 replies; 20+ messages in thread
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

The mbuf offload flags do not match the DPDK namespace (they are not
prefixed by RTE_). This coccinelle script is used in the next commit to
do the replacement in the code.
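
Each rule in the script is a plain token replacement. For instance,
the first rule below rewrites PKT_RX_VLAN to RTE_MBUF_F_RX_VLAN:

    @@
    @@

    - PKT_RX_VLAN
    + RTE_MBUF_F_RX_VLAN

The script can then be applied tree-wide with spatch (a typical
invocation; exact options may vary with the Coccinelle version):

    $ spatch --sp-file devtools/cocci/prefix_mbuf_offload_flags.cocci \
          --in-place --dir .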

A draft script was initially submitted with commit d7595795b760 ("doc:
announce renaming of mbuf offload flags"), but it was dropped by
mistake when that commit was applied.

Link: http://inbox.dpdk.org/dev/20210730155700.32574-1-olivier.matz@6wind.com
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 .../cocci/prefix_mbuf_offload_flags.cocci     | 441 ++++++++++++++++++
 1 file changed, 441 insertions(+)
 create mode 100644 devtools/cocci/prefix_mbuf_offload_flags.cocci

diff --git a/devtools/cocci/prefix_mbuf_offload_flags.cocci b/devtools/cocci/prefix_mbuf_offload_flags.cocci
new file mode 100644
index 0000000000..cac83032b4
--- /dev/null
+++ b/devtools/cocci/prefix_mbuf_offload_flags.cocci
@@ -0,0 +1,441 @@
+//
+// Rename mbuf offload flags (and some other defines) to have
+// an RTE_ prefix.
+// This only replaces usages in C code, so it is advised to
+// replace missing occurrences with:
+// for f in $(git ls-tree --full-tree -r --name-only HEAD); do
+//         if [ "$(file -b --mime-encoding $f)" = "binary" ]; then
+//                 continue
+//         fi
+//         sed -i \
+//             -e 's,EXT_ATTACHED_MBUF,RTE_MBUF_F_EXTERNAL,g' \
+//             -e 's,IND_ATTACHED_MBUF,RTE_MBUF_F_INDIRECT,g' \
+//             -e 's,EXT_ATTACHED_MBUF,RTE_MBUF_F_EXTERNAL,g' \
+//             -e 's,PKT_FIRST_FREE,RTE_MBUF_F_FIRST_FREE,g' \
+//             -e 's,PKT_LAST_FREE,RTE_MBUF_F_LAST_FREE,g' \
+//             -e 's,PKT_TX_DYNF_METADATA,RTE_MBUF_DYNFLAG_TX_METADATA,g' \
+//             -e 's,PKT_RX_DYNF_METADATA,RTE_MBUF_DYNFLAG_RX_METADATA,g' \
+//             -e 's,\<PKT_RX_,RTE_MBUF_F_RX_,g' \
+//             -e 's,\<PKT_TX_,RTE_MBUF_F_TX_,g' \
+//             $f
+// done
+//
+@@
+@@
+
+- PKT_RX_VLAN
++ RTE_MBUF_F_RX_VLAN
+
+@@
+@@
+
+- PKT_RX_RSS_HASH
++ RTE_MBUF_F_RX_RSS_HASH
+
+@@
+@@
+
+- PKT_RX_FDIR
++ RTE_MBUF_F_RX_FDIR
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_OUTER_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_EIP_CKSUM_BAD
++ RTE_MBUF_F_RX_EIP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_VLAN_STRIPPED
++ RTE_MBUF_F_RX_VLAN_STRIPPED
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_MASK
++ RTE_MBUF_F_RX_IP_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_BAD
++ RTE_MBUF_F_RX_IP_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_GOOD
++ RTE_MBUF_F_RX_IP_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_IP_CKSUM_NONE
++ RTE_MBUF_F_RX_IP_CKSUM_NONE
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_MASK
++ RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_GOOD
++ RTE_MBUF_F_RX_L4_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_L4_CKSUM_NONE
++ RTE_MBUF_F_RX_L4_CKSUM_NONE
+
+@@
+@@
+
+- PKT_RX_IEEE1588_PTP
++ RTE_MBUF_F_RX_IEEE1588_PTP
+
+@@
+@@
+
+- PKT_RX_IEEE1588_TMST
++ RTE_MBUF_F_RX_IEEE1588_TMST
+
+@@
+@@
+
+- PKT_RX_FDIR_ID
++ RTE_MBUF_F_RX_FDIR_ID
+
+@@
+@@
+
+- PKT_RX_FDIR_FLX
++ RTE_MBUF_F_RX_FDIR_FLX
+
+@@
+@@
+
+- PKT_RX_QINQ_STRIPPED
++ RTE_MBUF_F_RX_QINQ_STRIPPED
+
+@@
+@@
+
+- PKT_RX_LRO
++ RTE_MBUF_F_RX_LRO
+
+@@
+@@
+
+- PKT_RX_SEC_OFFLOAD
++ RTE_MBUF_F_RX_SEC_OFFLOAD
+
+@@
+@@
+
+- PKT_RX_SEC_OFFLOAD_FAILED
++ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
+
+@@
+@@
+
+- PKT_RX_QINQ
++ RTE_MBUF_F_RX_QINQ
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_MASK
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_UNKNOWN
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_BAD
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_GOOD
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+
+@@
+@@
+
+- PKT_RX_OUTER_L4_CKSUM_INVALID
++ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+@@
+@@
+
+- PKT_FIRST_FREE
++ RTE_MBUF_F_FIRST_FREE
+
+@@
+@@
+
+- PKT_LAST_FREE
++ RTE_MBUF_F_LAST_FREE
+
+@@
+@@
+
+- PKT_TX_OUTER_UDP_CKSUM
++ RTE_MBUF_F_TX_OUTER_UDP_CKSUM
+
+@@
+@@
+
+- PKT_TX_UDP_SEG
++ RTE_MBUF_F_TX_UDP_SEG
+
+@@
+@@
+
+- PKT_TX_SEC_OFFLOAD
++ RTE_MBUF_F_TX_SEC_OFFLOAD
+
+@@
+@@
+
+- PKT_TX_MACSEC
++ RTE_MBUF_F_TX_MACSEC
+
+@@
+@@
+
+- PKT_TX_TUNNEL_VXLAN
++ RTE_MBUF_F_TX_TUNNEL_VXLAN
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GRE
++ RTE_MBUF_F_TX_TUNNEL_GRE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_IPIP
++ RTE_MBUF_F_TX_TUNNEL_IPIP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GENEVE
++ RTE_MBUF_F_TX_TUNNEL_GENEVE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_MPLSINUDP
++ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_VXLAN_GPE
++ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+
+@@
+@@
+
+- PKT_TX_TUNNEL_GTP
++ RTE_MBUF_F_TX_TUNNEL_GTP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_IP
++ RTE_MBUF_F_TX_TUNNEL_IP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_UDP
++ RTE_MBUF_F_TX_TUNNEL_UDP
+
+@@
+@@
+
+- PKT_TX_TUNNEL_MASK
++ RTE_MBUF_F_TX_TUNNEL_MASK
+
+@@
+@@
+
+- PKT_TX_QINQ
++ RTE_MBUF_F_TX_QINQ
+
+@@
+@@
+
+- PKT_TX_QINQ_PKT
++ RTE_MBUF_F_TX_QINQ_PKT
+
+@@
+@@
+
+- PKT_TX_TCP_SEG
++ RTE_MBUF_F_TX_TCP_SEG
+
+@@
+@@
+
+- PKT_TX_IEEE1588_TMST
++ RTE_MBUF_F_TX_IEEE1588_TMST
+
+@@
+@@
+
+- PKT_TX_L4_NO_CKSUM
++ RTE_MBUF_F_TX_L4_NO_CKSUM
+
+@@
+@@
+
+- PKT_TX_TCP_CKSUM
++ RTE_MBUF_F_TX_TCP_CKSUM
+
+@@
+@@
+
+- PKT_TX_SCTP_CKSUM
++ RTE_MBUF_F_TX_SCTP_CKSUM
+
+@@
+@@
+
+- PKT_TX_UDP_CKSUM
++ RTE_MBUF_F_TX_UDP_CKSUM
+
+@@
+@@
+
+- PKT_TX_L4_MASK
++ RTE_MBUF_F_TX_L4_MASK
+
+@@
+@@
+
+- PKT_TX_IP_CKSUM
++ RTE_MBUF_F_TX_IP_CKSUM
+
+@@
+@@
+
+- PKT_TX_IPV4
++ RTE_MBUF_F_TX_IPV4
+
+@@
+@@
+
+- PKT_TX_IPV6
++ RTE_MBUF_F_TX_IPV6
+
+@@
+@@
+
+- PKT_TX_VLAN
++ RTE_MBUF_F_TX_VLAN
+
+@@
+@@
+
+- PKT_TX_VLAN_PKT
++ RTE_MBUF_F_TX_VLAN_PKT
+
+@@
+@@
+
+- PKT_TX_OUTER_IP_CKSUM
++ RTE_MBUF_F_TX_OUTER_IP_CKSUM
+
+@@
+@@
+
+- PKT_TX_OUTER_IPV4
++ RTE_MBUF_F_TX_OUTER_IPV4
+
+@@
+@@
+
+- PKT_TX_OUTER_IPV6
++ RTE_MBUF_F_TX_OUTER_IPV6
+
+@@
+@@
+
+- PKT_TX_OFFLOAD_MASK
++ RTE_MBUF_F_TX_OFFLOAD_MASK
+
+@@
+@@
+
+- EXT_ATTACHED_MBUF
++ RTE_MBUF_F_EXTERNAL
+
+@@
+@@
+
+- IND_ATTACHED_MBUF
++ RTE_MBUF_F_INDIRECT
+
+@@
+@@
+
+- PKT_TX_DYNF_METADATA
++ RTE_MBUF_DYNFLAG_TX_METADATA
+
+@@
+@@
+
+- PKT_RX_DYNF_METADATA
++ RTE_MBUF_DYNFLAG_RX_METADATA
-- 
2.30.2


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
                     ` (2 preceding siblings ...)
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 3/4] cocci: add cocci script to rename mbuf offload flags Olivier Matz
@ 2021-10-15 19:24   ` Olivier Matz
  2021-10-16  7:57     ` Andrew Rybchenko
  2021-10-24 11:43   ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace David Marchand
  4 siblings, 1 reply; 20+ messages in thread
From: Olivier Matz @ 2021-10-15 19:24 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

Fix the mbuf offload flags namespace by adding an RTE_ prefix to their
names. The old flags remain usable, but a deprecation warning is issued
at compile time.
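
For applications, the rename is mechanical; a minimal before/after
sketch using flags that appear in the diff below:

    /* before */
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;

    /* after */
    m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;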

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/csumonly.c                       |  62 +--
 app/test-pmd/flowgen.c                        |   8 +-
 app/test-pmd/ieee1588fwd.c                    |   6 +-
 app/test-pmd/macfwd.c                         |   8 +-
 app/test-pmd/macswap_common.h                 |  12 +-
 app/test-pmd/txonly.c                         |   8 +-
 app/test-pmd/util.c                           |  18 +-
 app/test/test_cryptodev_security_ipsec.c      |   4 +-
 app/test/test_ipsec.c                         |   4 +-
 app/test/test_mbuf.c                          | 144 ++++---
 doc/guides/nics/bnxt.rst                      |   8 +-
 doc/guides/nics/enic.rst                      |   8 +-
 doc/guides/nics/features.rst                  |  70 +--
 doc/guides/nics/ixgbe.rst                     |   2 +-
 doc/guides/nics/mlx5.rst                      |   6 +-
 .../generic_segmentation_offload_lib.rst      |   4 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/metrics_lib.rst         |   2 +-
 doc/guides/prog_guide/rte_flow.rst            |  14 +-
 doc/guides/rel_notes/deprecation.rst          |   5 -
 doc/guides/rel_notes/release_21_11.rst        |   3 +
 drivers/compress/mlx5/mlx5_compress.c         |   2 +-
 drivers/crypto/cnxk/cn10k_cryptodev_ops.c     |  12 +-
 drivers/crypto/cnxk/cn10k_ipsec_la_ops.h      |   4 +-
 drivers/crypto/mlx5/mlx5_crypto.c             |   2 +-
 drivers/event/cnxk/cn9k_worker.h              |   2 +-
 drivers/event/octeontx/ssovf_worker.c         |  36 +-
 drivers/event/octeontx/ssovf_worker.h         |   2 +-
 drivers/event/octeontx2/otx2_worker.h         |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |   4 +-
 drivers/net/atlantic/atl_rxtx.c               |  46 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_rxtx.c                |  64 +--
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c        |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   2 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                |   2 +-
 drivers/net/bnxt/bnxt_rxr.c                   |  50 +--
 drivers/net/bnxt/bnxt_rxr.h                   |  32 +-
 drivers/net/bnxt/bnxt_txr.c                   |  40 +-
 drivers/net/bnxt/bnxt_txr.h                   |  38 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   2 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  18 +-
 drivers/net/cnxk/cn10k_rx.h                   |  38 +-
 drivers/net/cnxk/cn10k_tx.h                   | 178 ++++----
 drivers/net/cnxk/cn9k_ethdev.c                |  18 +-
 drivers/net/cnxk/cn9k_rx.h                    |  32 +-
 drivers/net/cnxk/cn9k_tx.h                    | 170 ++++----
 drivers/net/cnxk/cnxk_ethdev.h                |  10 +-
 drivers/net/cnxk/cnxk_lookup.c                |  40 +-
 drivers/net/cxgbe/sge.c                       |  46 +-
 drivers/net/dpaa/dpaa_ethdev.h                |   7 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |  10 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  30 +-
 drivers/net/e1000/em_rxtx.c                   |  39 +-
 drivers/net/e1000/igb_rxtx.c                  |  81 ++--
 drivers/net/ena/ena_ethdev.c                  |  53 ++-
 drivers/net/enetc/enetc_rxtx.c                |  44 +-
 drivers/net/enic/enic_main.c                  |  10 +-
 drivers/net/enic/enic_res.c                   |  12 +-
 drivers/net/enic/enic_rxtx.c                  |  24 +-
 drivers/net/enic/enic_rxtx_common.h           |  18 +-
 drivers/net/enic/enic_rxtx_vec_avx2.c         |  80 ++--
 drivers/net/fm10k/fm10k_rxtx.c                |  43 +-
 drivers/net/fm10k/fm10k_rxtx_vec.c            |  25 +-
 drivers/net/hinic/hinic_pmd_rx.c              |  22 +-
 drivers/net/hinic/hinic_pmd_tx.c              |  56 +--
 drivers/net/hinic/hinic_pmd_tx.h              |  13 +-
 drivers/net/hns3/hns3_ethdev.h                |   2 +-
 drivers/net/hns3/hns3_rxtx.c                  | 108 ++---
 drivers/net/hns3/hns3_rxtx.h                  |  25 +-
 drivers/net/hns3/hns3_rxtx_vec_neon.h         |   2 +-
 drivers/net/hns3/hns3_rxtx_vec_sve.c          |   2 +-
 drivers/net/i40e/i40e_rxtx.c                  | 157 ++++---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c      |  22 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  70 +--
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  62 +--
 drivers/net/i40e/i40e_rxtx_vec_neon.c         |  50 +--
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |  60 +--
 drivers/net/iavf/iavf_rxtx.c                  |  90 ++--
 drivers/net/iavf/iavf_rxtx.h                  |  28 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       | 140 +++---
 drivers/net/iavf/iavf_rxtx_vec_common.h       |  16 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          | 112 ++---
 drivers/net/ice/ice_rxtx.c                    | 117 +++--
 drivers/net/ice/ice_rxtx_vec_avx2.c           | 158 +++----
 drivers/net/ice/ice_rxtx_vec_avx512.c         | 158 +++----
 drivers/net/ice/ice_rxtx_vec_common.h         |  16 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            | 112 ++---
 drivers/net/igc/igc_txrx.c                    |  67 +--
 drivers/net/ionic/ionic_rxtx.c                |  59 ++-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 113 +++--
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c       |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c        |  44 +-
 drivers/net/liquidio/lio_rxtx.c               |  16 +-
 drivers/net/mlx4/mlx4_rxtx.c                  |  22 +-
 drivers/net/mlx5/mlx5_flow.c                  |   2 +-
 drivers/net/mlx5/mlx5_rx.c                    |  18 +-
 drivers/net/mlx5/mlx5_rx.h                    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxtx.c                  |  18 +-
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h      |  76 ++--
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h         |  36 +-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h          |  38 +-
 drivers/net/mlx5/mlx5_tx.h                    | 104 ++---
 drivers/net/mvneta/mvneta_ethdev.h            |   6 +-
 drivers/net/mvneta/mvneta_rxtx.c              |  16 +-
 drivers/net/mvpp2/mrvl_ethdev.c               |  22 +-
 drivers/net/netvsc/hn_rxtx.c                  |  28 +-
 drivers/net/nfp/nfp_rxtx.c                    |  26 +-
 drivers/net/octeontx/octeontx_rxtx.h          |  38 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  18 +-
 drivers/net/octeontx2/otx2_lookup.c           |  40 +-
 drivers/net/octeontx2/otx2_rx.c               |  12 +-
 drivers/net/octeontx2/otx2_rx.h               |  22 +-
 drivers/net/octeontx2/otx2_tx.c               |  86 ++--
 drivers/net/octeontx2/otx2_tx.h               |  70 +--
 drivers/net/qede/qede_rxtx.c                  | 104 ++---
 drivers/net/qede/qede_rxtx.h                  |  20 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  14 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  18 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  52 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   6 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   6 +-
 drivers/net/sfc/sfc_ef10_rx_ev.h              |  16 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  18 +-
 drivers/net/sfc/sfc_rx.c                      |  22 +-
 drivers/net/sfc/sfc_tso.c                     |   2 +-
 drivers/net/sfc/sfc_tso.h                     |   2 +-
 drivers/net/sfc/sfc_tx.c                      |   4 +-
 drivers/net/tap/rte_eth_tap.c                 |  28 +-
 drivers/net/thunderx/nicvf_rxtx.c             |  24 +-
 drivers/net/thunderx/nicvf_rxtx.h             |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 172 ++++----
 drivers/net/vhost/rte_eth_vhost.c             |   2 +-
 drivers/net/virtio/virtio_rxtx.c              |  14 +-
 drivers/net/virtio/virtio_rxtx_packed.h       |   6 +-
 drivers/net/virtio/virtqueue.h                |  14 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  59 ++-
 drivers/regex/mlx5/mlx5_regex_fastpath.c      |   2 +-
 examples/bpf/t2.c                             |   4 +-
 examples/ip_fragmentation/main.c              |   2 +-
 examples/ip_reassembly/main.c                 |   2 +-
 examples/ipsec-secgw/esp.c                    |   6 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  20 +-
 examples/ipsec-secgw/ipsec_worker.c           |  12 +-
 examples/ipsec-secgw/sa.c                     |   2 +-
 examples/ptpclient/ptpclient.c                |   4 +-
 examples/qos_meter/main.c                     |  12 +-
 examples/vhost/main.c                         |  12 +-
 lib/ethdev/rte_ethdev.h                       |   4 +-
 lib/ethdev/rte_flow.h                         |  33 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   4 +-
 lib/gso/gso_common.h                          |  40 +-
 lib/gso/gso_tunnel_tcp4.c                     |   2 +-
 lib/gso/rte_gso.c                             |  10 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/ipsec/esp_inb.c                           |  10 +-
 lib/ipsec/esp_outb.c                          |   4 +-
 lib/ipsec/misc.h                              |   2 +-
 lib/ipsec/rte_ipsec_group.h                   |   6 +-
 lib/ipsec/sa.c                                |   2 +-
 lib/mbuf/rte_mbuf.c                           | 220 +++++-----
 lib/mbuf/rte_mbuf.h                           |  30 +-
 lib/mbuf/rte_mbuf_core.h                      | 404 +++++++++++-------
 lib/mbuf/rte_mbuf_dyn.c                       |   2 +-
 lib/net/rte_ether.h                           |   6 +-
 lib/net/rte_ip.h                              |   4 +-
 lib/net/rte_net.h                             |  22 +-
 lib/pipeline/rte_table_action.c               |  10 +-
 lib/security/rte_security.h                   |  10 +-
 lib/vhost/virtio_net.c                        |  42 +-
 174 files changed, 3120 insertions(+), 3013 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 090797318a..1faa508f83 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -481,12 +481,12 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 	if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr = l3_hdr;
 
-		ol_flags |= PKT_TX_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_IPV4;
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
-			ol_flags |= PKT_TX_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		} else {
 			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
-				ol_flags |= PKT_TX_IP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
 				ipv4_hdr->hdr_checksum =
@@ -494,7 +494,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 	} else if (info->ethertype == _htons(RTE_ETHER_TYPE_IPV6))
-		ol_flags |= PKT_TX_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_IPV6;
 	else
 		return 0; /* packet type not supported, nothing to do */
 
@@ -503,7 +503,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
 			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
-				ol_flags |= PKT_TX_UDP_CKSUM;
+				ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
 				udp_hdr->dgram_cksum =
@@ -512,13 +512,13 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			}
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_UDP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 	} else if (info->l4_proto == IPPROTO_TCP) {
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
-			ol_flags |= PKT_TX_TCP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
 			tcp_hdr->cksum =
@@ -526,7 +526,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 					info->ethertype);
 		}
 		if (info->gso_enable)
-			ol_flags |= PKT_TX_TCP_SEG;
+			ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	} else if (info->l4_proto == IPPROTO_SCTP) {
 		sctp_hdr = (struct rte_sctp_hdr *)
 			((char *)l3_hdr + info->l3_len);
@@ -534,7 +534,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		 * offloaded */
 		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
-			ol_flags |= PKT_TX_SCTP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 		} else {
 			sctp_hdr->cksum = 0;
 			/* XXX implement CRC32c, example available in
@@ -557,14 +557,14 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 
 	if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4)) {
 		ipv4_hdr->hdr_checksum = 0;
-		ol_flags |= PKT_TX_OUTER_IPV4;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 
 		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
-			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+			ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	} else
-		ol_flags |= PKT_TX_OUTER_IPV6;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_IPV6;
 
 	if (info->outer_l4_proto != IPPROTO_UDP)
 		return ol_flags;
@@ -573,7 +573,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		((char *)outer_l3_hdr + info->outer_l3_len);
 
 	if (tso_enabled)
-		ol_flags |= PKT_TX_TCP_SEG;
+		ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
 	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
@@ -584,7 +584,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 			udp_hdr->dgram_cksum
 				= rte_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
 
-		ol_flags |= PKT_TX_OUTER_UDP_CKSUM;
+		ol_flags |= RTE_MBUF_F_TX_OUTER_UDP_CKSUM;
 		return ol_flags;
 	}
 
@@ -855,17 +855,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		info.is_tunnel = 0;
 		info.pkt_len = rte_pktmbuf_pkt_len(m);
 		tx_ol_flags = m->ol_flags &
-			      (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF);
+			      (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL);
 		rx_ol_flags = m->ol_flags;
 
 		/* Update the L3/L4 checksum error packet statistics */
-		if ((rx_ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) == RTE_MBUF_F_RX_IP_CKSUM_BAD)
 			rx_bad_ip_csum += 1;
-		if ((rx_ol_flags & PKT_RX_L4_CKSUM_MASK) == PKT_RX_L4_CKSUM_BAD)
+		if ((rx_ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) == RTE_MBUF_F_RX_L4_CKSUM_BAD)
 			rx_bad_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_L4_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD)
 			rx_bad_outer_l4_csum += 1;
-		if (rx_ol_flags & PKT_RX_OUTER_IP_CKSUM_BAD)
+		if (rx_ol_flags & RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)
 			rx_bad_outer_ip_csum += 1;
 
 		/* step 1: dissect packet, parsing optional vlan, ip4/ip6, vxlan
@@ -888,26 +888,26 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gtp(udp_hdr, &info);
 				if (info.is_tunnel) {
-					tx_ol_flags |= PKT_TX_TUNNEL_GTP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GTP;
 					goto tunnel_update;
 				}
 				parse_vxlan_gpe(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN_GPE;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
 					goto tunnel_update;
 				}
 				parse_vxlan(udp_hdr, &info,
 					    m->packet_type);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_VXLAN;
+						RTE_MBUF_F_TX_TUNNEL_VXLAN;
 					goto tunnel_update;
 				}
 				parse_geneve(udp_hdr, &info);
 				if (info.is_tunnel) {
 					tx_ol_flags |=
-						PKT_TX_TUNNEL_GENEVE;
+						RTE_MBUF_F_TX_TUNNEL_GENEVE;
 					goto tunnel_update;
 				}
 			} else if (info.l4_proto == IPPROTO_GRE) {
@@ -917,14 +917,14 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					((char *)l3_hdr + info.l3_len);
 				parse_gre(gre_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_GRE;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_GRE;
 			} else if (info.l4_proto == IPPROTO_IPIP) {
 				void *encap_ip_hdr;
 
 				encap_ip_hdr = (char *)l3_hdr + info.l3_len;
 				parse_encap_ip(encap_ip_hdr, &info);
 				if (info.is_tunnel)
-					tx_ol_flags |= PKT_TX_TUNNEL_IPIP;
+					tx_ol_flags |= RTE_MBUF_F_TX_TUNNEL_IPIP;
 			}
 		}
 
@@ -950,7 +950,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			tx_ol_flags |= process_outer_cksums(outer_l3_hdr, &info,
 					tx_offloads,
-					!!(tx_ol_flags & PKT_TX_TCP_SEG));
+					!!(tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG));
 		}
 
 		/* step 3: fill the mbuf meta data (flags and header lengths) */
@@ -1014,7 +1014,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				"l4_proto=%d l4_len=%d flags=%s\n",
 				info.l2_len, rte_be_to_cpu_16(info.ethertype),
 				info.l3_len, info.l4_proto, info.l4_len, buf);
-			if (rx_ol_flags & PKT_RX_LRO)
+			if (rx_ol_flags & RTE_MBUF_F_RX_LRO)
 				printf("rx: m->lro_segsz=%u\n", m->tso_segsz);
 			if (info.is_tunnel == 1)
 				printf("rx: outer_l2_len=%d outer_ethertype=%x "
@@ -1035,17 +1035,17 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
 				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
-				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
+				    (tx_ol_flags & RTE_MBUF_F_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
 						m->outer_l2_len,
 						m->outer_l3_len);
 				if (info.tunnel_tso_segsz != 0 &&
-						(m->ol_flags & PKT_TX_TCP_SEG))
+						(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 					printf("tx: m->tso_segsz=%d\n",
 						m->tso_segsz);
 			} else if (info.tso_segsz != 0 &&
-					(m->ol_flags & PKT_TX_TCP_SEG))
+					(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 				printf("tx: m->tso_segsz=%d\n", m->tso_segsz);
 			rte_get_tx_ol_flag_list(m->ol_flags, buf, sizeof(buf));
 			printf("tx: flags=%s", buf);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0ce2afbea5..2da20d5309 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN;
+		ol_flags |= RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		if (!nb_pkt || !nb_clones) {
@@ -152,7 +152,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 								   sizeof(*ip_hdr));
 			pkt->nb_segs		= 1;
 			pkt->pkt_len		= pkt_size;
-			pkt->ol_flags		&= EXT_ATTACHED_MBUF;
+			pkt->ol_flags		&= RTE_MBUF_F_EXTERNAL;
 			pkt->ol_flags		|= ol_flags;
 			pkt->vlan_tci		= vlan_tci;
 			pkt->vlan_tci_outer	= vlan_tci_outer;
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 9cf10c1c50..3ff98c3455 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -114,7 +114,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	eth_hdr = rte_pktmbuf_mtod(mb, struct rte_ether_hdr *);
 	eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
 
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)) {
 		if (eth_type == RTE_ETHER_TYPE_1588) {
 			printf("Port %u Received PTP packet not filtered"
 			       " by hardware\n",
@@ -163,7 +163,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	 * Check that the received PTP packet has been timestamped by the
 	 * hardware.
 	 */
-	if (! (mb->ol_flags & PKT_RX_IEEE1588_TMST)) {
+	if (! (mb->ol_flags & RTE_MBUF_F_RX_IEEE1588_TMST)) {
 		printf("Port %u Received PTP packet not timestamped"
 		       " by hardware\n",
 		       fs->rx_port);
@@ -183,7 +183,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
 	rte_ether_addr_copy(&addr, &eth_hdr->src_addr);
 
 	/* Forward PTP packet with hardware TX timestamp */
-	mb->ol_flags |= PKT_TX_IEEE1588_TMST;
+	mb->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 	fs->tx_packets += 1;
 	if (rte_eth_tx_burst(fs->rx_port, fs->tx_queue, &mb, 1) == 0) {
 		printf("Port %u sent PTP packet dropped\n", fs->rx_port);
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index a67907b449..333998580b 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,11 +73,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
 			rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i + 1],
@@ -88,7 +88,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 				&eth_hdr->dst_addr);
 		rte_ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->src_addr);
-		mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+		mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 		mb->ol_flags |= ol_flags;
 		mb->l2_len = sizeof(struct rte_ether_hdr);
 		mb->l3_len = sizeof(struct rte_ipv4_hdr);
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a4..0d43d5cceb 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -11,11 +11,11 @@ ol_flags_init(uint64_t tx_offload)
 	uint64_t ol_flags = 0;
 
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
-			PKT_TX_VLAN : 0;
+			RTE_MBUF_F_TX_VLAN : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
-			PKT_TX_QINQ : 0;
+			RTE_MBUF_F_TX_QINQ : 0;
 	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
-			PKT_TX_MACSEC : 0;
+			RTE_MBUF_F_TX_MACSEC : 0;
 
 	return ol_flags;
 }
@@ -26,10 +26,10 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 {
 	int i;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci = vlan;
-	if (ol_flags & PKT_TX_QINQ)
+	if (ol_flags & RTE_MBUF_F_TX_QINQ)
 		for (i = 0; i < nb; i++)
 			pkts[i]->vlan_tci_outer = outer_vlan;
 }
@@ -37,7 +37,7 @@ vlan_qinq_set(struct rte_mbuf *pkts[], uint16_t nb,
 static inline void
 mbuf_field_set(struct rte_mbuf *mb, uint64_t ol_flags)
 {
-	mb->ol_flags &= IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF;
+	mb->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
 	mb->ol_flags |= ol_flags;
 	mb->l2_len = sizeof(struct rte_ether_hdr);
 	mb->l3_len = sizeof(struct rte_ipv4_hdr);
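
The &= against RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL used here and in
macfwd.c is worth spelling out: it clears every stale offload flag while
preserving the attachment state, which must survive or the free path
mishandles attached/external buffers. In isolation (tx_flags stands for the
value computed by ol_flags_init()):

/* drop stale Rx/Tx flags, keep attachment state */
m->ol_flags &= RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL;
m->ol_flags |= tx_flags;
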
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 0e44bc4d3b..7c34ef4541 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -207,7 +207,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 
 	rte_pktmbuf_reset_headroom(pkt);
 	pkt->data_len = tx_pkt_seg_lengths[0];
-	pkt->ol_flags &= EXT_ATTACHED_MBUF;
+	pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	pkt->ol_flags |= ol_flags;
 	pkt->vlan_tci = vlan_tci;
 	pkt->vlan_tci_outer = vlan_tci_outer;
@@ -353,11 +353,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN;
+		ol_flags = RTE_MBUF_F_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ;
+		ol_flags |= RTE_MBUF_F_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
-		ol_flags |= PKT_TX_MACSEC;
+		ol_flags |= RTE_MBUF_F_TX_MACSEC;
 
 	/*
 	 * Initialize Ethernet header.
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 51506e4940..34ad0a09ca 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -151,20 +151,20 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 			  eth_type, (unsigned int) mb->pkt_len,
 			  (int)mb->nb_segs);
 		ol_flags = mb->ol_flags;
-		if (ol_flags & PKT_RX_RSS_HASH) {
+		if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS hash=0x%x",
 				  (unsigned int) mb->hash.rss);
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - RSS queue=0x%x", (unsigned int) queue);
 		}
-		if (ol_flags & PKT_RX_FDIR) {
+		if (ol_flags & RTE_MBUF_F_RX_FDIR) {
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - FDIR matched ");
-			if (ol_flags & PKT_RX_FDIR_ID)
+			if (ol_flags & RTE_MBUF_F_RX_FDIR_ID)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "ID=0x%x", mb->hash.fdir.hi);
-			else if (ol_flags & PKT_RX_FDIR_FLX)
+			else if (ol_flags & RTE_MBUF_F_RX_FDIR_FLX)
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  "flex bytes=0x%08x %08x",
 					  mb->hash.fdir.hi, mb->hash.fdir.lo);
@@ -176,18 +176,18 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 		if (is_timestamp_enabled(mb))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - timestamp %"PRIu64" ", get_timestamp(mb));
-		if (ol_flags & PKT_RX_QINQ)
+		if (ol_flags & RTE_MBUF_F_RX_QINQ)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
 				  mb->vlan_tci, mb->vlan_tci_outer);
-		else if (ol_flags & PKT_RX_VLAN)
+		else if (ol_flags & RTE_MBUF_F_RX_VLAN)
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - VLAN tci=0x%x", mb->vlan_tci);
-		if (!is_rx && (ol_flags & PKT_TX_DYNF_METADATA))
+		if (!is_rx && (ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Tx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
-		if (is_rx && (ol_flags & PKT_RX_DYNF_METADATA))
+		if (is_rx && (ol_flags & RTE_MBUF_DYNFLAG_RX_METADATA))
 			MKDUMPSTR(print_buf, buf_size, cur_len,
 				  " - Rx metadata: 0x%x",
 				  *RTE_FLOW_DYNF_METADATA(mb));
@@ -325,7 +325,7 @@ tx_pkt_set_md(uint16_t port_id, __rte_unused uint16_t queue,
 		for (i = 0; i < nb_pkts; i++) {
 			*RTE_FLOW_DYNF_METADATA(pkts[i]) =
 						ports[port_id].tx_metadata;
-			pkts[i]->ol_flags |= PKT_TX_DYNF_METADATA;
+			pkts[i]->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
 		}
 	return nb_pkts;
 }
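
RTE_MBUF_DYNFLAG_TX_METADATA and RTE_MBUF_DYNFLAG_RX_METADATA are dynamic
flags, so they are only meaningful once the metadata dynfield has been
registered. A sketch of the setup side (helper name is illustrative):

#include <rte_flow.h>

static int
set_tx_metadata(struct rte_mbuf *m, uint32_t md)
{
	if (!rte_flow_dynf_metadata_avail() &&
	    rte_flow_dynf_metadata_register() < 0)
		return -1;

	*RTE_FLOW_DYNF_METADATA(m) = md;
	m->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
	return 0;
}
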
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index bcd9746c98..4708803bd2 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -524,7 +524,7 @@ test_ipsec_td_verify(struct rte_mbuf *m, const struct ipsec_test_data *td,
 
 	if ((td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) &&
 				flags->ip_csum) {
-		if (m->ol_flags & PKT_RX_IP_CKSUM_GOOD)
+		if (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_GOOD)
 			ret = test_ipsec_l3_csum_verify(m);
 		else
 			ret = TEST_FAILED;
@@ -537,7 +537,7 @@ test_ipsec_td_verify(struct rte_mbuf *m, const struct ipsec_test_data *td,
 
 	if ((td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) &&
 				flags->l4_csum) {
-		if (m->ol_flags & PKT_RX_L4_CKSUM_GOOD)
+		if (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_GOOD)
 			ret = test_ipsec_l4_csum_verify(m);
 		else
 			ret = TEST_FAILED;
diff --git a/app/test/test_ipsec.c b/app/test/test_ipsec.c
index c6d6b88d6d..1bec63b0e8 100644
--- a/app/test/test_ipsec.c
+++ b/app/test/test_ipsec.c
@@ -1622,8 +1622,8 @@ inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
 			"ibuf pkt_len is not equal to obuf pkt_len");
 
 		/* check mbuf ol_flags */
-		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
-			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD,
+			    "ibuf RTE_MBUF_F_TX_SEC_OFFLOAD is not set");
 	}
 	return 0;
 }
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 82777109dc..05e72ad123 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1495,7 +1495,7 @@ test_get_rx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_L4_CKSUM_MASK, buf, 0);
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_L4_CKSUM_MASK, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1526,7 +1526,8 @@ test_get_rx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_rx_ol_flag_list(PKT_RX_SEC_OFFLOAD, buf, sizeof(buf));
+	ret = rte_get_rx_ol_flag_list(RTE_MBUF_F_RX_SEC_OFFLOAD, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1553,7 +1554,7 @@ test_get_tx_ol_flag_list(void)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
 	/* Test case to check with zero buffer len */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_IP_CKSUM, buf, 0);
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_IP_CKSUM, buf, 0);
 	if (ret != -1)
 		GOTO_FAIL("%s expected: -1, received = %d\n", __func__, ret);
 
@@ -1585,7 +1586,8 @@ test_get_tx_ol_flag_list(void)
 				"non-zero, buffer should not be empty");
 
 	/* Test case to check with valid mask value */
-	ret = rte_get_tx_ol_flag_list(PKT_TX_UDP_CKSUM, buf, sizeof(buf));
+	ret = rte_get_tx_ol_flag_list(RTE_MBUF_F_TX_UDP_CKSUM, buf,
+				      sizeof(buf));
 	if (ret != 0)
 		GOTO_FAIL("%s expected: 0, received = %d\n", __func__, ret);
 
@@ -1611,28 +1613,28 @@ test_get_rx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name rx_flags[] = {
-		VAL_NAME(PKT_RX_VLAN),
-		VAL_NAME(PKT_RX_RSS_HASH),
-		VAL_NAME(PKT_RX_FDIR),
-		VAL_NAME(PKT_RX_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_L4_CKSUM_NONE),
-		VAL_NAME(PKT_RX_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_IP_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_IP_CKSUM_NONE),
-		VAL_NAME(PKT_RX_OUTER_IP_CKSUM_BAD),
-		VAL_NAME(PKT_RX_VLAN_STRIPPED),
-		VAL_NAME(PKT_RX_IEEE1588_PTP),
-		VAL_NAME(PKT_RX_IEEE1588_TMST),
-		VAL_NAME(PKT_RX_FDIR_ID),
-		VAL_NAME(PKT_RX_FDIR_FLX),
-		VAL_NAME(PKT_RX_QINQ_STRIPPED),
-		VAL_NAME(PKT_RX_LRO),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD),
-		VAL_NAME(PKT_RX_SEC_OFFLOAD_FAILED),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_BAD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_GOOD),
-		VAL_NAME(PKT_RX_OUTER_L4_CKSUM_INVALID),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN),
+		VAL_NAME(RTE_MBUF_F_RX_RSS_HASH),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_L4_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_IP_CKSUM_NONE),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_VLAN_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_PTP),
+		VAL_NAME(RTE_MBUF_F_RX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_ID),
+		VAL_NAME(RTE_MBUF_F_RX_FDIR_FLX),
+		VAL_NAME(RTE_MBUF_F_RX_QINQ_STRIPPED),
+		VAL_NAME(RTE_MBUF_F_RX_LRO),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD),
+		VAL_NAME(RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID),
 	};
 
 	/* Test case to check with valid flag */
@@ -1663,31 +1665,31 @@ test_get_tx_ol_flag_name(void)
 	uint16_t i;
 	const char *flag_str = NULL;
 	const struct flag_name tx_flags[] = {
-		VAL_NAME(PKT_TX_VLAN),
-		VAL_NAME(PKT_TX_IP_CKSUM),
-		VAL_NAME(PKT_TX_TCP_CKSUM),
-		VAL_NAME(PKT_TX_SCTP_CKSUM),
-		VAL_NAME(PKT_TX_UDP_CKSUM),
-		VAL_NAME(PKT_TX_IEEE1588_TMST),
-		VAL_NAME(PKT_TX_TCP_SEG),
-		VAL_NAME(PKT_TX_IPV4),
-		VAL_NAME(PKT_TX_IPV6),
-		VAL_NAME(PKT_TX_OUTER_IP_CKSUM),
-		VAL_NAME(PKT_TX_OUTER_IPV4),
-		VAL_NAME(PKT_TX_OUTER_IPV6),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN),
-		VAL_NAME(PKT_TX_TUNNEL_GRE),
-		VAL_NAME(PKT_TX_TUNNEL_IPIP),
-		VAL_NAME(PKT_TX_TUNNEL_GENEVE),
-		VAL_NAME(PKT_TX_TUNNEL_MPLSINUDP),
-		VAL_NAME(PKT_TX_TUNNEL_VXLAN_GPE),
-		VAL_NAME(PKT_TX_TUNNEL_IP),
-		VAL_NAME(PKT_TX_TUNNEL_UDP),
-		VAL_NAME(PKT_TX_QINQ),
-		VAL_NAME(PKT_TX_MACSEC),
-		VAL_NAME(PKT_TX_SEC_OFFLOAD),
-		VAL_NAME(PKT_TX_UDP_SEG),
-		VAL_NAME(PKT_TX_OUTER_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_VLAN),
+		VAL_NAME(RTE_MBUF_F_TX_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_SCTP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_IEEE1588_TMST),
+		VAL_NAME(RTE_MBUF_F_TX_TCP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IP_CKSUM),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV4),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_IPV6),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GRE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IPIP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_GENEVE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_MPLSINUDP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_IP),
+		VAL_NAME(RTE_MBUF_F_TX_TUNNEL_UDP),
+		VAL_NAME(RTE_MBUF_F_TX_QINQ),
+		VAL_NAME(RTE_MBUF_F_TX_MACSEC),
+		VAL_NAME(RTE_MBUF_F_TX_SEC_OFFLOAD),
+		VAL_NAME(RTE_MBUF_F_TX_UDP_SEG),
+		VAL_NAME(RTE_MBUF_F_TX_OUTER_UDP_CKSUM),
 	};
 
 	/* Test case to check with valid flag */
@@ -1755,8 +1757,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is counted only for IPV4 packet */
 	/* set both IP checksum and IPV6 flags */
-	ol_flags |= PKT_TX_IP_CKSUM;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_CKSUM_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1765,14 +1767,14 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 	ol_flags = 0;
 
 	/* test to validate if IP type is set when required */
-	ol_flags |= PKT_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_L4_MASK;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: IP type is not set.\n", __func__);
 
 	/* test if IP type is set when TCP SEG is on */
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
@@ -1780,8 +1782,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm IP type (IPV4/IPV6) is set */
-	ol_flags = PKT_TX_L4_MASK;
-	ol_flags |= PKT_TX_IPV6;
+	ol_flags = RTE_MBUF_F_TX_L4_MASK;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_IP_TYPE_SET",
 				pktmbuf_pool,
 				ol_flags, 0, 0) < 0)
@@ -1789,15 +1791,15 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to check TSO segment size is non-zero */
-	ol_flags |= PKT_TX_IPV4;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	/* set 0 tso segment size */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_NULL_TSO_SEGSZ",
 				pktmbuf_pool,
 				ol_flags, 0, -EINVAL) < 0)
 		GOTO_FAIL("%s failed: tso segment size is null.\n", __func__);
 
-	/* retain IPV4 and PKT_TX_TCP_SEG mask */
+	/* retain IPV4 and RTE_MBUF_F_TX_TCP_SEG mask */
 	/* set valid tso segment size but IP CKSUM not set */
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_NOT_SET",
 				pktmbuf_pool,
@@ -1806,7 +1808,7 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to validate if IP checksum is set for TSO capability */
 	/* retain IPV4, TCP_SEG, tso_seg size */
-	ol_flags |= PKT_TX_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IP_CKSUM_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1814,8 +1816,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	/* test to confirm TSO for IPV6 type */
 	ol_flags = 0;
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_TCP_SEG;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_TSO_IPV6_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -1823,8 +1825,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test if outer IP checksum set for non outer IPv4 packet */
-	ol_flags |= PKT_TX_IPV6;
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_IPV6;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_NOT_SET",
 				pktmbuf_pool,
 				ol_flags, 512, -EINVAL) < 0)
@@ -1832,8 +1834,8 @@ test_mbuf_validate_tx_offload_one(struct rte_mempool *pktmbuf_pool)
 
 	ol_flags = 0;
 	/* test to confirm outer IP checksum is set for outer IPV4 packet */
-	ol_flags |= PKT_TX_OUTER_IP_CKSUM;
-	ol_flags |= PKT_TX_OUTER_IPV4;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IP_CKSUM;
+	ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4;
 	if (test_mbuf_validate_tx_offload("MBUF_TEST_OUTER_IPV4_SET",
 				pktmbuf_pool,
 				ol_flags, 512, 0) < 0)
@@ -2366,7 +2368,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	buf_iova = rte_mem_virt2iova(ext_buf_addr);
 	rte_pktmbuf_attach_extbuf(m, ext_buf_addr, buf_iova, buf_len,
 		ret_shinfo);
-	if (m->ol_flags != EXT_ATTACHED_MBUF)
+	if (m->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2380,7 +2382,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 	/* attach the same external buffer to the cloned mbuf */
 	rte_pktmbuf_attach_extbuf(clone, ext_buf_addr, buf_iova, buf_len,
 			ret_shinfo);
-	if (clone->ol_flags != EXT_ATTACHED_MBUF)
+	if (clone->ol_flags != RTE_MBUF_F_EXTERNAL)
 		GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
 				__func__);
 
@@ -2672,8 +2674,8 @@ test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
 			flag2, strerror(errno));
 
 	flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
-						rte_bsf64(PKT_LAST_FREE));
-	if (flag3 != rte_bsf64(PKT_LAST_FREE))
+						rte_bsf64(RTE_MBUF_F_LAST_FREE));
+	if (flag3 != rte_bsf64(RTE_MBUF_F_LAST_FREE))
 		GOTO_FAIL("failed to register dynamic flag 3, flag3=%d: %s",
 			flag3, strerror(errno));
 
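
The stringification helpers exercised above also make it easy to verify
this migration from an application, since they print the new flag names.
For instance (buffer size arbitrary):

char buf[256];

if (rte_get_rx_ol_flag_list(m->ol_flags, buf, sizeof(buf)) == 0)
	printf("rx flags: %s\n", buf);
printf("%s\n", rte_get_tx_ol_flag_name(RTE_MBUF_F_TX_TCP_SEG));
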
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index f2f5eff48d..72f4b53109 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -512,9 +512,9 @@ configured TPID.
     // enable VLAN insert offload
     testpmd> port config (port_id) rx_offload vlan_insert|qinq_insert (on|off)
 
-    if (mbuf->ol_flags && PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+    if (mbuf->ol_flags & RTE_MBUF_F_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
         tci_value = mbuf->vlan_tci_outer
-    else if (mbuf->ol_flags && PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+    else if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)  // case-2: insert VLAN to untagged packet
         tci_value = mbuf->vlan_tci
 
 VLAN Strip
@@ -528,7 +528,7 @@ The application configures the per-port VLAN strip offload.
     testpmd> port config (port_id) tx_offload vlan_strip (on|off)
 
     // notify application VLAN strip via mbuf
-    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED // outer VLAN is found and stripped
     mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
 
 Time Synchronization
@@ -552,7 +552,7 @@ packets to application via mbuf.
 .. code-block:: console
 
     // RX packet completion will indicate whether the packet is PTP
-    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+    mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
 
 Statistics Collection
 ~~~~~~~~~~~~~~~~~~~~~
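
To complement the Rx pseudo-code above, the Tx side of the VLAN offload is
driven purely by mbuf fields; a short example with arbitrary TCI values:

m->vlan_tci = 100;			/* inner tag */
m->ol_flags |= RTE_MBUF_F_TX_VLAN;
/* double tagging additionally needs: */
m->vlan_tci_outer = 200;		/* outer tag */
m->ol_flags |= RTE_MBUF_F_TX_QINQ;
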
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a..d5ffd51dea 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -279,9 +279,9 @@ inner and outer packets can be IPv4 or IPv6.
 - Rx checksum offloads.
 
   The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
-  Good checksum flags (e.g. ``PKT_RX_L4_CKSUM_GOOD``) indicate that the inner
+  Good checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_GOOD``) indicate that the inner
   packet has the correct checksum, and if applicable, the outer packet also
-  has the correct checksum. Bad checksum flags (e.g. ``PKT_RX_L4_CKSUM_BAD``)
+  has the correct checksum. Bad checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_BAD``)
   indicate that the inner and/or outer packets have invalid checksum values.
 
 - Inner Rx packet type classification
@@ -437,8 +437,8 @@ Limitations
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
 packets with the default VLAN tag are stripped by the adapter and presented to
-DPDK as untagged packets. In this case mbuf->vlan_tci and the PKT_RX_VLAN and
-PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
+DPDK as untagged packets. In this case mbuf->vlan_tci and the RTE_MBUF_F_RX_VLAN and
+RTE_MBUF_F_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
 ``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::
 
     -a 12:00.0,ig-vlan-rewrite=untag
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index e346018e4b..fe830338ec 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -210,7 +210,7 @@ Supports Large Receive Offload.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_LRO``, ``mbuf.tso_segsz``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
@@ -224,7 +224,7 @@ Supports TCP Segmentation Offloading.
 
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:`` ``RTE_MBUF_F_TX_TCP_SEG``, ``RTE_MBUF_F_TX_IPV4``, ``RTE_MBUF_F_TX_IPV6``, ``RTE_MBUF_F_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
 * **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
@@ -292,7 +292,7 @@ Supports RSS hashing on RX.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_inner_rss:
@@ -304,7 +304,7 @@ Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
 * **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
 
 
 .. _nic_features_rss_key_update:
@@ -424,8 +424,8 @@ of protocol operations. See Security library and PMD documentation for more deta
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
 
 
@@ -447,8 +447,8 @@ protocol operations. See security library and PMD documentation for more details
   ``capabilities_get``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
-  ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
 
 
@@ -472,9 +472,9 @@ Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
-* **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
+* **[uses]       mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
-* **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
+* **[provides]   mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
@@ -490,9 +490,9 @@ Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
-  ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_QINQ``, ``mbuf.vlan_tci_outer``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_QINQ``,
+  ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:RTE_MBUF_F_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
@@ -522,12 +522,12 @@ Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
-  ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
-  ``PKT_RX_IP_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_BAD`` | ``RTE_MBUF_F_RX_IP_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
@@ -541,13 +541,13 @@ Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
-  ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_L4_NO_CKSUM`` | ``RTE_MBUF_F_TX_TCP_CKSUM`` |
+  ``RTE_MBUF_F_TX_SCTP_CKSUM`` | ``RTE_MBUF_F_TX_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
-  ``PKT_RX_L4_CKSUM_NONE``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_L4_CKSUM_GOOD`` |
+  ``RTE_MBUF_F_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
@@ -559,7 +559,7 @@ Timestamp offload
 Supports Timestamp.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
@@ -573,7 +573,7 @@ Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
@@ -587,12 +587,12 @@ Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
-  ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_IPV4`` | ``RTE_MBUF_F_TX_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IP_CKSUM``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
@@ -605,11 +605,11 @@ Inner L4 checksum
 Supports inner packet L4 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
-  ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN`` |
+  ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD`` | ``RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID``.
 * **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
-* **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
-  ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
+* **[uses]     mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_IPV4`` | ``RTE_MBUF_F_TX_OUTER_IPV6``,
+  ``mbuf.ol_flags:RTE_MBUF_F_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
   ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
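
Tying the L3/L4 checksum items above together, the Tx contract is: set the
lengths, set the flags, zero the IP checksum and seed the L4 checksum with
the pseudo-header. A sketch for TCP over plain IPv4 (ipv4_hdr/tcp_hdr are
assumed to point into the mbuf data; needs <rte_ether.h>, <rte_ip.h>,
<rte_tcp.h>):

m->l2_len = sizeof(struct rte_ether_hdr);
m->l3_len = sizeof(struct rte_ipv4_hdr);
m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
	       RTE_MBUF_F_TX_TCP_CKSUM;
ipv4_hdr->hdr_checksum = 0;
tcp_hdr->cksum = rte_ipv4_phdr_cksum(ipv4_hdr, m->ol_flags);
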
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index 20a74b9b5b..437662aa05 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -284,7 +284,7 @@ Intel 82599 10 Gigabit Ethernet Controller Specification Update (Revision 2.87)
 Errata: 44 Integrity Error Reported for IPv4/UDP Packets With Zero Checksum
 
 To support UDP zero checksum, the zero and bad UDP checksum packet is marked as
-PKT_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
+RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, so the application needs to recompute the checksum to
 validate it.
 
 Inline crypto processing support
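
For the recompute this errata note calls for, the librte_net verify helper
(available since 21.08) can be used directly; a sketch where
ipv4_hdr/udp_hdr are assumed to point into the packet and the counter is
illustrative:

if ((m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
    RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN &&
    rte_ipv4_udptcp_cksum_verify(ipv4_hdr, udp_hdr) != 0)
	rx_bad_l4++;	/* checksum did not verify in software */
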
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d8..9324ce7818 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -255,7 +255,7 @@ Limitations
   no MPRQ feature or vectorized code can be engaged.
 
 - When Multi-Packet Rx queue is configured (``mprq_en``), a Rx packet can be
-  externally attached to a user-provided mbuf with having EXT_ATTACHED_MBUF in
+  externally attached to a user-provided mbuf with RTE_MBUF_F_EXTERNAL set in
   ol_flags. As the mempool for the external buffer is managed by PMD, all the
   Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
   the external buffers will be freed by PMD and the application which still
@@ -263,7 +263,7 @@ Limitations
 
 - If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
   enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
-  supported. Some Rx packets may not have PKT_RX_RSS_HASH.
+  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.
 
 - IPv6 Multicast messages are not supported on VM, while promiscuous mode
   and allmulticast mode are both set to off.
@@ -644,7 +644,7 @@ Driver options
   the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
   A mempool for external buffers will be allocated and managed by PMD. If Rx
   packet is externally attached, ol_flags field of the mbuf will have
-  EXT_ATTACHED_MBUF and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
+  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
   checks the flag. The default value is 128, valid only if ``mprq_en`` is set.
 
 - ``rxqs_min_mprq`` parameter [int]
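
Applications holding MPRQ mbufs can test the attachment flag through the
existing helper rather than open-coding RTE_MBUF_F_EXTERNAL:

/* must happen before the port is closed, per the limitation above */
if (RTE_MBUF_HAS_EXTBUF(m))
	rte_pktmbuf_free(m);	/* returns the PMD-managed buffer */
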
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b..6537f3d5d6 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -211,11 +211,11 @@ To segment an outgoing packet, an application must:
      responsibility to ensure that these flags are set.
 
    - For example, in order to segment TCP/IPv4 packets, the application should
-     add the ``PKT_TX_IPV4`` and ``PKT_TX_TCP_SEG`` flags to the mbuf's
+     add the ``RTE_MBUF_F_TX_IPV4`` and ``RTE_MBUF_F_TX_TCP_SEG`` flags to the mbuf's
      ol_flags.
 
    - If checksum calculation in hardware is required, the application should
-     also add the ``PKT_TX_TCP_CKSUM`` and ``PKT_TX_IP_CKSUM`` flags.
+     also add the ``RTE_MBUF_F_TX_TCP_CKSUM`` and ``RTE_MBUF_F_TX_IP_CKSUM`` flags.
 
 #. Check if the packet should be processed. Packets with one of the
    following properties are not processed and are returned immediately:
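
Put together, the GSO preparation described above reduces to a few
assignments before rte_gso_segment(); gso_ctx and pkts_out are assumed to
be set up by the application:

m->l2_len = sizeof(struct rte_ether_hdr);
m->l3_len = sizeof(struct rte_ipv4_hdr);
m->l4_len = sizeof(struct rte_tcp_hdr);
m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_SEG |
	       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;
ret = rte_gso_segment(m, &gso_ctx, pkts_out, RTE_DIM(pkts_out));
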
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e4..15b266c295 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -123,7 +123,7 @@ timestamp mechanism, the VLAN tagging and the IP checksum computation.
 
 On TX side, it is also possible for an application to delegate some
 processing to the hardware if it supports it. For instance, the
-PKT_TX_IP_CKSUM flag allows to offload the computation of the IPv4
+RTE_MBUF_F_TX_IP_CKSUM flag allows offloading the computation of the IPv4
 checksum.
 
 The following examples explain how to configure different TX offloads on
@@ -134,7 +134,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM
     set out_ip checksum to 0 in the packet
 
   This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
@@ -143,7 +143,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth)
     mb->l3_len = len(out_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_UDP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -154,7 +154,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
@@ -165,7 +165,7 @@ a vxlan-encapsulated tcp packet:
 
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM | PKT_TX_TCP_CKSUM
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
@@ -179,8 +179,8 @@ a vxlan-encapsulated tcp packet:
     mb->l2_len = len(out_eth + out_ip + out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
     mb->l4_len = len(in_tcp)
-    mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM |
-      PKT_TX_TCP_SEG;
+    mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM |
+      RTE_MBUF_F_TX_TCP_SEG;
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
@@ -194,8 +194,8 @@ a vxlan-encapsulated tcp packet:
     mb->outer_l3_len = len(out_ip)
     mb->l2_len = len(out_udp + vxlan + in_eth)
     mb->l3_len = len(in_ip)
-    mb->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM  | \
-      PKT_TX_IP_CKSUM |  PKT_TX_TCP_CKSUM;
+    mb->ol_flags |= RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM  | \
+      RTE_MBUF_F_TX_IP_CKSUM |  RTE_MBUF_F_TX_TCP_CKSUM;
     set out_ip checksum to 0 in the packet
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
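
For completeness, case 1) of the pseudo-code above rendered as plain C with
the renamed flags (mb is assumed to hold a full Ethernet/IPv4 frame):

struct rte_ipv4_hdr *ip;

mb->l2_len = sizeof(struct rte_ether_hdr);
mb->l3_len = sizeof(struct rte_ipv4_hdr);
mb->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;
ip = rte_pktmbuf_mtod_offset(mb, struct rte_ipv4_hdr *, mb->l2_len);
ip->hdr_checksum = 0;	/* filled in by hardware */
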
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index eca855d601..f8416eaa02 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -290,7 +290,7 @@ Timestamp and latency calculation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The Latency stats library marks the time in the timestamp field of the
-mbuf for the ingress packets and sets the ``PKT_RX_TIMESTAMP`` flag of
+mbuf for the ingress packets and sets the ``RTE_MBUF_F_RX_TIMESTAMP`` flag of
 ``ol_flags`` for the mbuf to indicate the marked time as a valid one.
 At the egress, the mbufs with the flag set are considered having valid
 timestamp and are used for the latency calculation.
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..8f9251953d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -687,9 +687,9 @@ Item: ``META``
 Matches 32 bit metadata item set.
 
 On egress, metadata can be set either by mbuf metadata field with
-PKT_TX_DYNF_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
+RTE_MBUF_DYNFLAG_TX_METADATA flag or ``SET_META`` action. On ingress, ``SET_META``
 action sets metadata for a packet and the metadata will be reported via
-``metadata`` dynamic field of ``rte_mbuf`` with PKT_RX_DYNF_METADATA flag.
+``metadata`` dynamic field of ``rte_mbuf`` with RTE_MBUF_DYNFLAG_RX_METADATA flag.
 
 - Default ``mask`` matches the specified Rx metadata value.
 
@@ -1656,8 +1656,8 @@ flows to loop between groups.
 Action: ``MARK``
 ^^^^^^^^^^^^^^^^
 
-Attaches an integer value to packets and sets ``PKT_RX_FDIR`` and
-``PKT_RX_FDIR_ID`` mbuf flags.
+Attaches an integer value to packets and sets ``RTE_MBUF_F_RX_FDIR`` and
+``RTE_MBUF_F_RX_FDIR_ID`` mbuf flags.
 
 This value is arbitrary and application-defined. Maximum allowed value
 depends on the underlying implementation. It is returned in the
@@ -1677,7 +1677,7 @@ Action: ``FLAG``
 ^^^^^^^^^^^^^^^^
 
 Flags packets. Similar to `Action: MARK`_ without a specific value; only
-sets the ``PKT_RX_FDIR`` mbuf flag.
+sets the ``RTE_MBUF_F_RX_FDIR`` mbuf flag.
 
 - No configurable properties.
 
@@ -2635,10 +2635,10 @@ Action: ``SET_META``
 
 Set metadata. Item ``META`` matches metadata.
 
-Metadata set by mbuf metadata field with PKT_TX_DYNF_METADATA flag on egress
+Metadata set by mbuf metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag on egress
 will be overridden by this action. On ingress, the metadata will be carried by
 ``metadata`` dynamic field of ``rte_mbuf`` which can be accessed by
-``RTE_FLOW_DYNF_METADATA()``. PKT_RX_DYNF_METADATA flag will be set along
+``RTE_FLOW_DYNF_METADATA()``. RTE_MBUF_DYNFLAG_RX_METADATA flag will be set along
 with the data.
 
 The mbuf dynamic field must be registered by calling
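
On the receive side, the MARK/FLAG results renamed above surface as follows
(printfs purely illustrative):

if (m->ol_flags & RTE_MBUF_F_RX_FDIR) {
	if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
		printf("mark=%u\n", m->hash.fdir.hi);
	else
		printf("flagged, no mark value\n");
}
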
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 45239ca56e..07bed606a6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -39,11 +39,6 @@ Deprecation Notices
   ``__atomic_thread_fence`` must be used for patches that need to be merged in
   20.08 onwards. This change will not introduce any performance degradation.
 
-* mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
-  A compatibility layer will be kept until DPDK 22.11, except for the flags
-  that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
-  ``PKT_RX_EIP_CKSUM_BAD``, ``PKT_TX_QINQ_PKT``) which will be removed.
-
 * pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
   will be made internal in 21.11 and macros/data structures/functions defined
   in the header will not be considered as ABI anymore. This change is inspired
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 4c56cdfeaa..df1a3053a2 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -229,6 +229,9 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* mbuf: The mbuf offload flags ``PKT_*`` are renamed as ``RTE_MBUF_F_*``. A
+  compatibility layer will be kept until DPDK 22.11.
+
 
 ABI Changes
 -----------
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 5c5aa87a18..ed822412ec 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -470,7 +470,7 @@ mlx5_compress_addr2mr(struct mlx5_compress_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index c25c8e67b2..a16f75337b 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -368,20 +368,20 @@ cn10k_cpt_sec_ucc_process(struct rte_crypto_op *cop,
 	switch (uc_compcode) {
 	case ROC_IE_OT_UCC_SUCCESS:
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_IP_BADCSUM:
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_L4_GOODCSUM:
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case ROC_IE_OT_UCC_SUCCESS_PKT_L4_BADCSUM:
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		if (sa->ip_csum_enable)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	default:
 		break;
diff --git a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
index df1b0a3678..881fbd19b3 100644
--- a/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
+++ b/drivers/crypto/cnxk/cn10k_ipsec_la_ops.h
@@ -69,10 +69,10 @@ process_outb_sa(struct rte_crypto_op *cop, struct cn10k_ipsec_sa *sess,
 	}
 #endif
 
-	if (m_src->ol_flags & PKT_TX_IP_CKSUM)
+	if (m_src->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		inst_w4_u64 &= ~BIT_ULL(33);
 
-	if (m_src->ol_flags & PKT_TX_L4_MASK)
+	if (m_src->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 		inst_w4_u64 &= ~BIT_ULL(32);
 
 	/* Prepare CPT instruction */
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 6a2f8b6ac6..714ff539ca 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -334,7 +334,7 @@ mlx5_crypto_addr2mr(struct mlx5_crypto_priv *priv, uintptr_t addr,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(ol_flags & EXT_ATTACHED_MBUF));
+				  !!(ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 static __rte_always_inline uint32_t
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 6be9be0b47..d536c0a8ca 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -642,7 +642,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
 		uint64_t ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			uintptr_t ssow_base = base;
 
 			if (ev->sched_type)
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index 8b056ddc5a..1300c4f155 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -428,53 +428,53 @@ octeontx_create_rx_ol_flags_array(void *mem)
 		errcode = idx & 0xff;
 		errlev = (idx & 0x700) >> 8;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case OCCTX_ERRLEV_RE:
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LC:
 			if (errcode == OCCTX_EC_IP4_CSUM) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case OCCTX_ERRLEV_LD:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LE:
 			if (errcode == OCCTX_EC_IP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case OCCTX_ERRLEV_LF:
 			/* Check if parsed packet is neither IPv4 or IPV6 */
 			if (errcode == OCCTX_EC_IP4_NOT)
 				break;
-			val |= PKT_RX_IP_CKSUM_GOOD;
+			val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			if (errcode == OCCTX_EC_L4_CSUM)
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			else
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			break;
 		}
 
diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h
index f609b296ed..ccc6de588e 100644
--- a/drivers/event/octeontx/ssovf_worker.h
+++ b/drivers/event/octeontx/ssovf_worker.h
@@ -126,7 +126,7 @@ ssovf_octeontx_wqe_to_pkt(uint64_t work, uint16_t port_info,
 
 	if (!!(flag & OCCTX_RX_VLAN_FLTR_F)) {
 		if (likely(wqe->s.w2.vv)) {
-			mbuf->ol_flags |= PKT_RX_VLAN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->vlan_tci =
 				ntohs(*((uint16_t *)((char *)mbuf->buf_addr +
 					mbuf->data_off + wqe->s.w4.vlptr + 2)));
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index 3e36dcece1..aa766c6602 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -277,7 +277,7 @@ otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	uint16_t ref_cnt = m->refcnt;
 
 	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
-	    (m->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+	    (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 		txq = otx2_ssogws_xtract_meta(m, txq_data);
 		return otx2_sec_event_tx(base, ev, m, txq, flags);
 	}
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 931fc230e5..ad8506344a 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -149,7 +149,7 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		/* check for vlan info */
 		if (ppd->tp_status & TP_STATUS_VLAN_VALID) {
 			mbuf->vlan_tci = ppd->tp_vlan_tci;
-			mbuf->ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			mbuf->ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 			if (!pkt_q->vlan_strip && rte_vlan_insert(&mbuf))
 				PMD_LOG(ERR, "Failed to reinsert VLAN tag");
@@ -229,7 +229,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
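
Aside on the hunk above: rte_vlan_insert() takes a pointer-to-pointer so
the implementation is free to replace the mbuf while inserting the tag,
which is why the caller's reference must be updatable. Out of context:

if ((m->ol_flags & RTE_MBUF_F_TX_VLAN) && rte_vlan_insert(&m) != 0)
	rte_pktmbuf_free(m);	/* drop on failure, as the PMD does */
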
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index fca682d8b0..e7805ac2b2 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -15,20 +15,20 @@
 #include "hw_atl/hw_atl_b0_internal.h"
 
 #define ATL_TX_CKSUM_OFFLOAD_MASK (			 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_MASK (				 \
-	PKT_TX_VLAN |					 \
-	PKT_TX_IPV6 |					 \
-	PKT_TX_IPV4 |					 \
-	PKT_TX_IP_CKSUM |				 \
-	PKT_TX_L4_MASK |				 \
-	PKT_TX_TCP_SEG)
+	RTE_MBUF_F_TX_VLAN |					 \
+	RTE_MBUF_F_TX_IPV6 |					 \
+	RTE_MBUF_F_TX_IPV4 |					 \
+	RTE_MBUF_F_TX_IP_CKSUM |				 \
+	RTE_MBUF_F_TX_L4_MASK |				 \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define ATL_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ATL_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -850,21 +850,21 @@ atl_desc_to_offload_flags(struct atl_rx_queue *rxq,
 	if (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) {
 		/* IPv4 csum error ? */
 		if (rxd_wb->rx_stat & BIT(1))
-			mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	/* CSUM calculated ? */
 	if (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) {
 		if (rxd_wb->rx_stat & BIT(2))
-			mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	} else {
-		mbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 
 	return mbuf_flags;
@@ -1044,12 +1044,12 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);
 
 			if (rx_mbuf->packet_type & RTE_PTYPE_L2_ETHER_VLAN) {
-				rx_mbuf->ol_flags |= PKT_RX_VLAN;
+				rx_mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				rx_mbuf->vlan_tci = rxd_wb.vlan;
 
 				if (cfg->vlan_strip)
 					rx_mbuf->ol_flags |=
-						PKT_RX_VLAN_STRIPPED;
+						RTE_MBUF_F_RX_VLAN_STRIPPED;
 			}
 
 			if (!rx_mbuf_first)
@@ -1179,12 +1179,12 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 	uint32_t tx_cmd = 0;
 	uint64_t ol_flags = tx_pkt->ol_flags;
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_cmd |= tx_desc_cmd_lso | tx_desc_cmd_l4cs;
 
 		txc->cmd = 0x4;
 
-		if (ol_flags & PKT_TX_IPV6)
+		if (ol_flags & RTE_MBUF_F_TX_IPV6)
 			txc->cmd |= 0x2;
 
 		txc->l2_len = tx_pkt->l2_len;
@@ -1194,7 +1194,7 @@ atl_tso_setup(struct rte_mbuf *tx_pkt, union hw_atl_txc_s *txc)
 		txc->mss_len = tx_pkt->tso_segsz;
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_cmd |= tx_desc_cmd_vlan;
 		txc->vlan_tag = tx_pkt->vlan_tci;
 	}
@@ -1212,9 +1212,9 @@ atl_setup_csum_offload(struct rte_mbuf *mbuf, struct hw_atl_txd_s *txd,
 		       uint32_t tx_cmd)
 {
 	txd->cmd |= tx_desc_cmd_fcs;
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ? tx_desc_cmd_ipv4 : 0;
 	/* L4 csum requested */
-	txd->cmd |= (mbuf->ol_flags & PKT_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
+	txd->cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ? tx_desc_cmd_l4cs : 0;
 	txd->cmd |= tx_cmd;
 }
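
The NOTSUP mask above follows the usual convention: whatever part of
RTE_MBUF_F_TX_OFFLOAD_MASK the PMD does not list as supported gets rejected.
A minimal sketch of the corresponding tx_prepare-style check, function name
hypothetical, using ATL_TX_OFFLOAD_NOTSUP_MASK as defined above:

  #include <errno.h>
  #include <rte_mbuf.h>

  /* Sketch: refuse an mbuf requesting an offload this PMD
   * cannot perform.
   */
  static inline int
  atl_check_tx_offload(const struct rte_mbuf *m)
  {
          if (m->ol_flags & ATL_TX_OFFLOAD_NOTSUP_MASK)
                  return -ENOTSUP;
          return 0;
  }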
 
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6f0dafc287..3898e8299d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1310,7 +1310,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	src_offset = 0;
 
 	if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-		ol_flags = PKT_RX_VLAN;
+		ol_flags = RTE_MBUF_F_RX_VLAN;
 		vlan_tci = pkt_buf->vlan_tci;
 	} else {
 		ol_flags = 0;
@@ -1568,7 +1568,7 @@ avp_recv_pkts(void *rx_queue,
 		m->port = avp->port_id;
 
 		if (pkt_buf->ol_flags & RTE_AVP_RX_VLAN_PKT) {
-			m->ol_flags = PKT_RX_VLAN;
+			m->ol_flags = RTE_MBUF_F_RX_VLAN;
 			m->vlan_tci = pkt_buf->vlan_tci;
 		}
 
@@ -1674,7 +1674,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index c9d5800b01..f0fd3c6eb8 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -260,17 +260,17 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (
 				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
@@ -282,25 +282,25 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 				} else {
 					mbuf->ol_flags &=
-						~(PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED);
+						~(RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED);
 					mbuf->vlan_tci = 0;
 				}
 		}
 		/* Indicate if a Context Descriptor is next */
 		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, CDA))
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
-					| PKT_RX_IEEE1588_TMST;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP
+					| RTE_MBUF_F_RX_IEEE1588_TMST;
 		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
 					     PL) - rxq->crc_len;
 		/* Mbuf populate */
@@ -426,17 +426,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		offloads = rxq->pdata->eth_dev->data->dev_conf.rxmode.offloads;
 		if (!err || !etlt) {
 			if (etlt == RX_CVLAN_TAG_PRESENT) {
-				mbuf->ol_flags |= PKT_RX_VLAN;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
 				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				else
-					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
+					mbuf->ol_flags &= ~RTE_MBUF_F_RX_VLAN_STRIPPED;
 			} else {
 				mbuf->ol_flags &=
-					~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+					~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 				mbuf->vlan_tci = 0;
 			}
 		}
@@ -465,17 +465,17 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 		first_seg->port = rxq->port_id;
 		if (rxq->pdata->rx_csum_enable) {
 			mbuf->ol_flags = 0;
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 			} else if (unlikely(error_status
 						== AXGBE_L4_CSUM_ERR)) {
-				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
-				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 
@@ -795,7 +795,7 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
 			   mbuf->pkt_len);
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, TTSE, 1);
 	rte_wmb();
 	/* Mark it as First and Last Descriptor */
@@ -804,14 +804,14 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	/* Mark it as a NORMAL descriptor */
 	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
 	/* configure h/w Offload */
-	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
-	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+	mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+	if ((mask == RTE_MBUF_F_TX_TCP_CKSUM) || (mask == RTE_MBUF_F_TX_UDP_CKSUM))
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
-	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+	else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (mbuf->ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
index 1c962b9333..816371cd79 100644
--- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
+++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
@@ -23,7 +23,7 @@ axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
 {
 	uint64_t tmst_en = 0;
 	/* Timestamp enablement check */
-	if (mbuf->ol_flags & PKT_TX_IEEE1588_TMST)
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		tmst_en = TX_DESC_CTRL_FLAG_TMST;
 	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
 					    TX_DESC_CTRL_FLAGS | mbuf->data_len
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 819e54044b..f67db015b5 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN) {
+	if (m0->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index fea7a34e7d..66b0512c86 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -435,7 +435,7 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (cqe_fp->pars_flags.flags & PARSING_FLAGS_VLAN) {
 			rx_mb->vlan_tci = cqe_fp->vlan_tag;
-			rx_mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			rx_mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		rx_pkts[nb_rx] = rx_mb;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 4c1ee4294e..18eda482ef 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -260,25 +260,25 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 	mbuf->pkt_len = rte_le_to_cpu_32(tpa_start->len);
 	mbuf->data_len = mbuf->pkt_len;
 	mbuf->port = rxq->port_id;
-	mbuf->ol_flags = PKT_RX_LRO;
+	mbuf->ol_flags = RTE_MBUF_F_RX_LRO;
 
 	bnxt_tpa_get_metadata(rxq->bp, tpa_info, tpa_start, tpa_start1);
 
 	if (likely(tpa_info->hash_valid)) {
 		mbuf->hash.rss = tpa_info->rss_hash;
-		mbuf->ol_flags |= PKT_RX_RSS_HASH;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	} else if (tpa_info->cfa_code_valid) {
 		mbuf->hash.fdir.id = tpa_info->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 	}
 
 	if (tpa_info->vlan_valid && BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 		mbuf->vlan_tci = tpa_info->vlan;
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 
 	if (likely(tpa_info->l4_csum_valid))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* recycle next mbuf */
 	data_cons = RING_NEXT(data_cons);
@@ -576,34 +576,34 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 
 		if (BNXT_RX_VLAN_STRIP_EN(rxq->bp)) {
 			if (i & RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN)
-				pt[i] |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+				pt[i] |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		if (i & (RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC << 3)) {
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 			} else {
 				if (i & RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC)
-					pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 				if (i & RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
-					pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & RX_PKT_CMPL_FLAGS2_IP_CS_CALC)
-				pt[i] |= PKT_RX_IP_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 			if (i & RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
-				pt[i] |= PKT_RX_L4_CKSUM_GOOD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 	}
 
@@ -616,30 +616,30 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 			/* Tunnel case. */
 			if (outer_cksum_enabled) {
 				if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_OUTER_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else {
 				if (i & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_IP_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (i & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR >> 4))
-					pt[i] |= PKT_RX_L4_CKSUM_BAD;
+					pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		} else {
 			/* Non-tunnel case. */
 			if (i & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_IP_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 			if (i & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR >> 4))
-				pt[i] |= PKT_RX_L4_CKSUM_BAD;
+				pt[i] |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		}
 	}
 }
@@ -677,13 +677,13 @@ bnxt_set_ol_flags(struct bnxt_rx_ring_info *rxr, struct rx_pkt_cmpl *rxcmp,
 
 	if (flags_type & RX_PKT_CMPL_FLAGS_RSS_VALID) {
 		mbuf->hash.rss = rte_le_to_cpu_32(rxcmp->rss_hash);
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
 		     RX_PKT_CMPL_FLAGS_ITYPE_PTP_W_TIMESTAMP))
-		ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+		ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 
 	mbuf->ol_flags = ol_flags;
@@ -807,7 +807,7 @@ bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 		mbuf->hash.fdir.hi = mark_id;
 		*bnxt_cfa_code_dynfield(mbuf) = cfa_code & 0xffffffffull;
 		mbuf->hash.fdir.id = rxcmp1->cfa_code;
-		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		return mark_id;
 	}
 
@@ -854,7 +854,7 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp,
 	}
 
 	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
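
The two FDIR flags set above travel together: RTE_MBUF_F_RX_FDIR reports a
flow-rule match, and RTE_MBUF_F_RX_FDIR_ID additionally validates the
identifier stored in hash.fdir. A minimal application-side sketch, function
name hypothetical:

  #include <rte_mbuf.h>

  /* Sketch: read a flow mark delivered through the FDIR flags. */
  static inline int
  get_flow_mark(const struct rte_mbuf *m, uint32_t *mark)
  {
          const uint64_t f = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;

          if ((m->ol_flags & f) != f)
                  return -1;
          *mark = m->hash.fdir.hi;
          return 0;
  }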
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 59adb7242c..a84f016609 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -212,7 +212,7 @@ static inline void bnxt_rx_vlan_v2(struct rte_mbuf *mbuf,
 {
 	if (RX_CMP_VLAN_VALID(rxcmp)) {
 		mbuf->vlan_tci = RX_CMP_METADATA0_VID(rxcmp1);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	}
 }
 
@@ -276,47 +276,47 @@ static inline void bnxt_parse_csum_v2(struct rte_mbuf *mbuf,
 			t_pkt = 1;
 
 		if (unlikely(RX_CMP_V2_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_L4_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (unlikely(RX_CMP_V2_L3_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else if (flags2 & RX_CMP_FLAGS2_IP_CSUM_ALL_OK_MASK)
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	} else {
 		hdr_cnt = RX_CMP_V2_L4_CS_OK(flags2);
 		if (hdr_cnt > 1)
 			t_pkt = 1;
 
 		if (RX_CMP_V2_L4_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (RX_CMP_V2_L4_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 
 		if (RX_CMP_V2_L3_CS_OK(flags2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (RX_CMP_V2_L3_CS_ERR(error_v2))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 	}
 
 	if (t_pkt) {
 		if (unlikely(RX_CMP_V2_OT_L4_CS_ERR(error_v2) ||
 					RX_CMP_V2_T_L4_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		else
-			mbuf->ol_flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 		if (unlikely(RX_CMP_V2_T_IP_CS_ERR(error_v2)))
-			mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	}
 }
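
Unlike single-bit flags, the L4 status set above is a 2-bit field (GOOD, BAD,
NONE, UNKNOWN), so readers must mask before comparing. A minimal sketch of the
application-side decode:

  #include <rte_mbuf.h>

  /* Sketch: decode the 2-bit RX L4 checksum status. */
  static inline const char *
  l4_csum_status(const struct rte_mbuf *m)
  {
          switch (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) {
          case RTE_MBUF_F_RX_L4_CKSUM_GOOD:
                  return "good";
          case RTE_MBUF_F_RX_L4_CKSUM_BAD:
                  return "bad";
          case RTE_MBUF_F_RX_L4_CKSUM_NONE:
                  return "not computed";
          default:
                  return "unknown";
          }
  }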
 
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index e36da59fce..31f5e856fb 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -111,12 +111,12 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 static bool
 bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
-	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
-				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
-				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
-				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ) ||
+	if (tx_pkt->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_TCP_CKSUM |
+				RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM |
+				RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+				RTE_MBUF_F_TX_TUNNEL_GRE | RTE_MBUF_F_TX_TUNNEL_VXLAN |
+				RTE_MBUF_F_TX_TUNNEL_GENEVE | RTE_MBUF_F_TX_IEEE1588_TMST |
+				RTE_MBUF_F_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -203,13 +203,13 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
+		} else if (tx_pkt->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
@@ -239,7 +239,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		else
 			txbd1->cfa_action = txq->bp->tx_cfa_action;
 
-		if (tx_pkt->ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			uint16_t hdr_size;
 
 			/* TSO */
@@ -247,7 +247,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 					 TX_BD_LONG_LFLAGS_T_IPID;
 			hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len;
-			hdr_size += (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				    tx_pkt->outer_l2_len +
 				    tx_pkt->outer_l3_len : 0;
 			/* The hdr_size is a multiple of 16-bit units, not 8-bit.
@@ -302,24 +302,24 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			   PKT_TX_TCP_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
-			   PKT_TX_TCP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ==
+			   RTE_MBUF_F_TX_TCP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
-			   PKT_TX_UDP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) ==
+			   RTE_MBUF_F_TX_UDP_CKSUM) {
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
-			   PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
-			   PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ==
+			   RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
-		} else if ((tx_pkt->ol_flags & PKT_TX_IEEE1588_TMST) ==
-			   PKT_TX_IEEE1588_TMST) {
+		} else if ((tx_pkt->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) ==
+			   RTE_MBUF_F_TX_IEEE1588_TMST) {
 			/* PTP */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_STAMP;
 		}
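
The '==' comparisons above, rather than a plain '&', are deliberate: the TX L4
request is a 2-bit value in which RTE_MBUF_F_TX_UDP_CKSUM (3ULL << 52) shares
a bit with RTE_MBUF_F_TX_TCP_CKSUM (1ULL << 52). A minimal restatement of the
safe test:

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Sketch: mask the 2-bit L4 field first, then compare; a plain
   * '& RTE_MBUF_F_TX_TCP_CKSUM' would also match UDP requests.
   */
  static inline int
  wants_udp_cksum(uint64_t ol_flags)
  {
          return (ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
                 RTE_MBUF_F_TX_UDP_CKSUM;
  }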
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 6bfdc6d01a..e11343c082 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -60,25 +60,25 @@ int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_flush_tx_cmp(struct bnxt_cp_ring_info *cpr);
 
-#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
-					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
-#define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
-					PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
-					 PKT_TX_OUTER_IP_CKSUM)
-#define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_IIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)
+#define PKT_TX_OIP_TCP_UDP_CKSUM	(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(RTE_MBUF_F_TX_UDP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | \
+					RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_CKSUM		(RTE_MBUF_F_TX_IP_CKSUM |	\
+					 RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+#define PKT_TX_TCP_UDP_CKSUM		(RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM)
 
 
 #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 542c6633b5..ce40eef28a 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -112,7 +112,7 @@ is_lacp_packets(uint16_t ethertype, uint8_t subtype, struct rte_mbuf *mbuf)
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	return !((mbuf->ol_flags & PKT_RX_VLAN) ? mbuf->vlan_tci : 0) &&
+	return !((mbuf->ol_flags & RTE_MBUF_F_RX_VLAN) ? mbuf->vlan_tci : 0) &&
 		(ethertype == ether_type_slow_be &&
 		(subtype == SLOW_SUBTYPE_MARKER || subtype == SLOW_SUBTYPE_LACP));
 }
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 9dfea99db9..6a86998c88 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -50,15 +50,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
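
These build-time assertions exist because the fastpath derives descriptor
fields by shifting ol_flags directly instead of testing each flag. A minimal
restatement of that derivation, mirroring the cn10k_tx.h code below:

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Sketch: the NIX send descriptor's L4 type is the 2-bit value at
   * bits 52-53 of ol_flags, which is why the flag positions are
   * pinned with RTE_BUILD_BUG_ON above.
   */
  static inline uint8_t
  nix_il4type(uint64_t ol_flags)
  {
          return (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
  }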
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index fcc451aa36..5afc188e96 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -163,10 +163,10 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uintptr_t sa_base, uintptr_t laddr,
 		res_w1 = sg[10];
 
 		/* Clear checksum flags and update security flag */
-		*ol_flags &= ~(PKT_RX_L4_CKSUM_MASK | PKT_RX_IP_CKSUM_MASK);
+		*ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK | RTE_MBUF_F_RX_IP_CKSUM_MASK);
 		*ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
-			      PKT_RX_SEC_OFFLOAD :
-			      (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED));
+			      RTE_MBUF_F_RX_SEC_OFFLOAD :
+			      (RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
 		/* Calculate inner packet length */
 		len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
 			sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
@@ -229,9 +229,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -315,7 +315,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	/* Process Security packets */
@@ -331,9 +331,9 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 			/* Rlen */
 			len = ((res_w1 >> 16) & 0xFFFF) + mbuf->pkt_len;
 			ol_flags |= ((uc_cc == CPT_COMP_WARN) ?
-						   PKT_RX_SEC_OFFLOAD :
-						   (PKT_RX_SEC_OFFLOAD |
-					      PKT_RX_SEC_OFFLOAD_FAILED));
+						   RTE_MBUF_F_RX_SEC_OFFLOAD :
+						   (RTE_MBUF_F_RX_SEC_OFFLOAD |
+					      RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
 		} else {
 			if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
 				ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
@@ -345,11 +345,11 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
@@ -495,7 +495,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -506,7 +506,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -678,10 +678,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -778,8 +778,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c6f349b352..36f6aec35e 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -458,12 +458,12 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -472,18 +472,18 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -498,7 +498,7 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -548,11 +548,11 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -564,15 +564,15 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -583,16 +583,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -608,27 +608,27 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -639,20 +639,20 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -686,7 +686,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 	}
 
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
-		*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);
+		*sec = !!(ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 }
 
 static __rte_always_inline void
@@ -722,7 +722,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			      const uint16_t flags)
 {
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
 			(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
@@ -742,7 +742,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -1118,7 +1118,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -1127,20 +1127,20 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 		uint8_t shift = is_udp_tun ? 32 : 0;
 
-		shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-		shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+		shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 		w1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -1784,26 +1784,26 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1988,40 +1988,40 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -2209,11 +2209,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QinQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -2255,8 +2255,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestamp. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
@@ -2425,8 +2425,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
-			const uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,
-						PKT_TX_SEC_OFFLOAD};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_SEC_OFFLOAD,
+						RTE_MBUF_F_TX_SEC_OFFLOAD};
 			uintptr_t next;
 			uint8_t dw;
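
One more idiom from this file worth spelling out: the TSO prepare code above
computes mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 |
RTE_MBUF_F_TX_OUTER_IPV6)), a branchless select that is all-ones only when an
outer IP flag is present. A minimal sketch, names hypothetical:

  #include <stdint.h>
  #include <rte_mbuf.h>

  /* Sketch: include the outer header lengths in the LSO start
   * offset only when an outer IP flag is set, without branching.
   */
  static inline uint16_t
  lso_start_offset(uint64_t ol_flags, uint16_t outer_l2, uint16_t outer_l3,
                   uint16_t l2, uint16_t l3, uint16_t l4)
  {
          uint64_t mask = -(uint64_t)!!(ol_flags &
                  (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));

          return (uint16_t)(mask & (uint64_t)(outer_l2 + outer_l3)) +
                 l2 + l3 + l4;
  }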
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 08c86f9e6b..6cc6044f89 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -50,15 +50,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 7ab415a194..03773c5436 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -103,9 +103,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to CNXK_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != CNXK_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -237,7 +237,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	rte_prefetch0((void *)data);
 
 	if (unlikely(res != (CPT_COMP_GOOD | ROC_IE_ONF_UCC_SUCCESS << 8)))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	data += lcptr;
 	/* 20 bits of tag would have the SPI */
@@ -258,7 +258,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	win_sz = (uint32_t)(dw >> 64);
 	if (win_sz) {
 		if (ipsec_antireplay_check(sa, sa_priv, data, win_sz) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	/* Get total length from IPv4 header. We can assume only IPv4 */
@@ -272,7 +272,7 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	*rearm_val |= data_off;
 
 	*len = rte_be_to_cpu_16(ipv4->total_length) + lcptr;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -319,7 +319,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -328,11 +328,11 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 skip_parse:
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->cn9k.vtag0_tci;
 		}
 		if (rx->cn9k.vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->cn9k.vtag1_tci;
 		}
 	}
@@ -437,7 +437,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -448,7 +448,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -549,10 +549,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0;
 			ol_flags1 = 0;
@@ -625,8 +625,8 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC,
 						  RTE_PTYPE_L2_ETHER_TIMESYNC};
-			const uint64_t ts_olf = PKT_RX_IEEE1588_PTP |
-						PKT_RX_IEEE1588_TMST |
+			const uint64_t ts_olf = RTE_MBUF_F_RX_IEEE1588_PTP |
+						RTE_MBUF_F_RX_IEEE1588_TMST |
 						rxq->tstamp->rx_tstamp_dynflag;
 			const uint32x4_t and_mask = {0x1, 0x2, 0x4, 0x8};
 			uint64x2_t ts01, ts23, mask;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 44273eca90..79a70ddcdd 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -62,12 +62,12 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			 m->l2_len + m->l3_len + m->l4_len;
 
@@ -76,18 +76,18 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
 					      (2 << !!(ol_flags &
-						       PKT_TX_OUTER_IPV6)));
+						       RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -102,7 +102,7 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
 					     m->l4_len +
-					     (2 << !!(ol_flags & PKT_TX_IPV6)));
+					     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -152,11 +152,11 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -168,15 +168,15 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -187,16 +187,16 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		       ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -212,27 +212,27 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment by 1 for IPv4, since type 3 is IPv4 with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			     ((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			     !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			     ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			     !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR && flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
-	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (flags & NIX_TX_OFFLOAD_TSO_F && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -243,20 +243,20 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun =
 				(CNXK_NIX_UDP_TUN_BITMASK >>
-				 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+				 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 				0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
@@ -297,7 +297,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -310,7 +310,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -554,7 +554,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	uint16_t lso_sb;
 	uint64_t mask;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return;
 
 	mask = -(!w1->il3type);
@@ -563,15 +563,15 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	w0->u |= BIT(14);
 	w0->lso_sb = lso_sb;
 	w0->lso_mps = m->tso_segsz;
-	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+	w0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 	w1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 	/* Handle tunnel tso */
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-	    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+	    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		const uint8_t is_udp_tun =
 			(CNXK_NIX_UDP_TUN_BITMASK >>
-			 ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &
+			 ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) &
 			0x1;
 
 		w1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;
@@ -579,7 +579,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 		/* Update format for UDP tunneled packet */
 		w0->lso_format += is_udp_tun ? 2 : 6;
 
-		w0->lso_format += !!(ol_flags & PKT_TX_OUTER_IPV6) << 1;
+		w0->lso_format += !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 1;
 	}
 }
 
@@ -1061,26 +1061,26 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -1265,40 +1265,40 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
 
@@ -1486,11 +1486,11 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
 			/* Tx ol_flag for vlan. */
-			const uint64x2_t olv = {PKT_TX_VLAN, PKT_TX_VLAN};
+			const uint64x2_t olv = {RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN};
 			/* Bit enable for VLAN1 */
 			const uint64x2_t mlv = {BIT_ULL(49), BIT_ULL(49)};
 			/* Tx ol_flag for QnQ. */
-			const uint64x2_t olq = {PKT_TX_QINQ, PKT_TX_QINQ};
+			const uint64x2_t olq = {RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ};
 			/* Bit enable for VLAN0 */
 			const uint64x2_t mlq = {BIT_ULL(48), BIT_ULL(48)};
 			/* Load vlan values from packet. outer is VLAN 0 */
@@ -1532,8 +1532,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 			/* Tx ol_flag for timestamp. */
-			const uint64x2_t olf = {PKT_TX_IEEE1588_TMST,
-						PKT_TX_IEEE1588_TMST};
+			const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST,
+						RTE_MBUF_F_TX_IEEE1588_TMST};
 			/* Set send mem alg to SUB. */
 			const uint64x2_t alg = {BIT_ULL(59), BIT_ULL(59)};
 			/* Increment send mem address by 8. */
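
The shift arithmetic in the hunks above (">> 45" for the tunnel type
field, ">> 52" for the L4 checksum field) survives the rename because
the RTE_MBUF_F_* macros alias the historical bit layout of the PKT_*
flags. A minimal compile-time sketch of that assumption, not part of
the patch:

    #include <rte_mbuf_core.h>

    /* Tunnel type is a 4-bit field at bits [48:45]. */
    _Static_assert(RTE_MBUF_F_TX_TUNNEL_MASK == (0xFULL << 45),
                   "tunnel type field moved");
    /* L4 checksum request is a 2-bit field at bits [53:52]. */
    _Static_assert(RTE_MBUF_F_TX_L4_MASK == (0x3ULL << 52),
                   "L4 checksum field moved");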
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ff21b977b7..0667c2f115 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -124,8 +124,8 @@
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
 #define CNXK_NIX_UDP_TUN_BITMASK                                               \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |                               \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 /* Subtype from inline outbound error event */
 #define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
@@ -596,15 +596,15 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*cnxk_nix_timestamp_dynfield(mbuf, tstamp) =
 			rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 				*cnxk_nix_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-					  PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+					  RTE_MBUF_F_RX_IEEE1588_TMST |
 					  tstamp->rx_tstamp_dynflag;
 		}
 	}
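
CNXK_NIX_UDP_TUN_BITMASK above is a one-hot table: each UDP-based
tunnel type (VXLAN, GENEVE) sets the bit whose position equals its
4-bit tunnel-type value. A hedged sketch of how the cn9k Tx hunks
consume it; the helper name is illustrative:

    #include <stdint.h>
    #include <rte_mbuf_core.h>

    /* Shift the one-hot table right by the packet's tunnel type and
     * keep bit 0: 1 means a UDP-based tunnel (VXLAN or GENEVE). */
    static inline uint8_t nix_is_udp_tun(uint64_t ol_flags)
    {
            const uint64_t bitmask =
                    (1ULL << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) |
                    (1ULL << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45));

            return (bitmask >>
                    ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
    }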
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index f6ec7689fc..4eb1ecf17d 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -238,9 +238,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -248,46 +248,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
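
nix_create_rx_ol_flags_array() above precomputes an ol_flags word for
every (error level, error code) pair so the Rx hot path does one table
load instead of branching. A hedged sketch of the lookup side; the
function name is illustrative, the index layout mirrors the generator
loop:

    #include <stdint.h>

    /* Index bits [3:0] carry the error level, bits [11:4] the error
     * code, matching "errlev = idx & 0xf" above. */
    static inline uint64_t
    nix_rx_olflags_get(const uint64_t *tbl, uint8_t errlev, uint8_t errcode)
    {
            return tbl[((uint16_t)errcode << 4) | (errlev & 0xf)];
    }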
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 3299d6252e..20aa84b653 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -539,7 +539,7 @@ static inline unsigned int flits_to_desc(unsigned int n)
  */
 static inline int is_eth_imm(const struct rte_mbuf *m)
 {
-	unsigned int hdrlen = (m->ol_flags & PKT_TX_TCP_SEG) ?
+	unsigned int hdrlen = (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ?
 			      sizeof(struct cpl_tx_pkt_lso_core) : 0;
 
 	hdrlen += sizeof(struct cpl_tx_pkt);
@@ -749,12 +749,12 @@ static u64 hwcsum(enum chip_type chip, const struct rte_mbuf *m)
 {
 	int csum_type;
 
-	if (m->ol_flags & PKT_TX_IP_CKSUM) {
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+	if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			csum_type = TX_CSUM_TCPIP;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			csum_type = TX_CSUM_UDPIP;
 			break;
 		default:
@@ -1029,7 +1029,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	/* fill the cpl message, same as in t4_eth_xmit, this should be kept
 	 * similar to t4_eth_xmit
 	 */
-	if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		cntrl = hwcsum(adap->params.chip, mbuf) |
 			       F_TXPKT_IPCSUM_DIS;
 		txq->stats.tx_cso++;
@@ -1037,7 +1037,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN) {
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1129,7 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		return 0;
 	}
 
-	if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
+	if ((!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) &&
 	    (unlikely(m->pkt_len > max_pkt_len)))
 		goto out_free;
 
@@ -1140,7 +1140,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	/* align the end of coalesce WR to a 512 byte boundary */
 	txq->q.coalesce.max = (8 - (txq->q.pidx & 7)) * 8;
 
-	if (!((m->ol_flags & PKT_TX_TCP_SEG) ||
+	if (!((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) ||
 			m->pkt_len > RTE_ETHER_MAX_LEN)) {
 		if (should_tx_packet_coalesce(txq, mbuf, &cflits, adap)) {
 			if (unlikely(map_mbuf(mbuf, addr) < 0)) {
@@ -1203,7 +1203,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 	len += sizeof(*cpl);
 
 	/* Coalescing skipped and we send through normal path */
-	if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+	if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		wr->op_immdlen = htonl(V_FW_WR_OP(is_pf4(adap) ?
 						  FW_ETH_TX_PKT_WR :
 						  FW_ETH_TX_PKT_VM_WR) |
@@ -1212,7 +1212,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			cpl = (void *)(wr + 1);
 		else
 			cpl = (void *)(vmwr + 1);
-		if (m->ol_flags & PKT_TX_IP_CKSUM) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			cntrl = hwcsum(adap->params.chip, m) |
 				F_TXPKT_IPCSUM_DIS;
 			txq->stats.tx_cso++;
@@ -1222,7 +1222,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 			lso = (void *)(wr + 1);
 		else
 			lso = (void *)(vmwr + 1);
-		v6 = (m->ol_flags & PKT_TX_IPV6) != 0;
+		v6 = (m->ol_flags & RTE_MBUF_F_TX_IPV6) != 0;
 		l3hdr_len = m->l3_len;
 		l4hdr_len = m->l4_len;
 		eth_xtra_len = m->l2_len - RTE_ETHER_HDR_LEN;
@@ -1258,7 +1258,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
@@ -1528,27 +1528,27 @@ static inline void cxgbe_fill_mbuf_info(struct adapter *adap,
 
 	if (cpl->vlan_ex)
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER_VLAN,
-				    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+				    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	else
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L2_ETHER, 0);
 
 	if (cpl->l2info & htonl(F_RXF_IP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV4,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_IP6))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L3_IPV6,
-				    csum_ok ? PKT_RX_IP_CKSUM_GOOD :
-					      PKT_RX_IP_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_IP_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_IP_CKSUM_BAD);
 
 	if (cpl->l2info & htonl(F_RXF_TCP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_TCP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 	else if (cpl->l2info & htonl(F_RXF_UDP))
 		cxgbe_set_mbuf_info(pkt, RTE_PTYPE_L4_UDP,
-				    csum_ok ? PKT_RX_L4_CKSUM_GOOD :
-					      PKT_RX_L4_CKSUM_BAD);
+				    csum_ok ? RTE_MBUF_F_RX_L4_CKSUM_GOOD :
+				    RTE_MBUF_F_RX_L4_CKSUM_BAD);
 }
 
 /**
@@ -1639,7 +1639,7 @@ static int process_responses(struct sge_rspq *q, int budget,
 
 				if (!rss_hdr->filter_tid &&
 				    rss_hdr->hash_type) {
-					pkt->ol_flags |= PKT_RX_RSS_HASH;
+					pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					pkt->hash.rss =
 						ntohl(rss_hdr->hash_val);
 				}
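
The renamed Tx flags are a contract filled in by the application
before the burst call; hwcsum() above only maps the request onto
TX_CSUM_TCPIP/TX_CSUM_UDPIP descriptor values. A hedged
application-side example, not taken from the patch:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>

    /* Request IPv4 header + TCP checksum offload; cxgbe's hwcsum()
     * then picks TX_CSUM_TCPIP for this packet. */
    static void request_tcpip_csum(struct rte_mbuf *m)
    {
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                           RTE_MBUF_F_TX_TCP_CKSUM;
    }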
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c2..98edc53359 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -80,10 +80,9 @@
 	ETH_RSS_TCP | \
 	ETH_RSS_SCTP)
 
-#define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
-		PKT_TX_IP_CKSUM |                \
-		PKT_TX_TCP_CKSUM |               \
-		PKT_TX_UDP_CKSUM)
+#define DPAA_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |                \
+		RTE_MBUF_F_TX_TCP_CKSUM |               \
+		RTE_MBUF_F_TX_UDP_CKSUM)
 
 /* DPAA Frame descriptor macros */
 
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 423de40e95..ffac6ce3e2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -125,8 +125,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
 
-	m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_GOOD |
-		PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (prs) {
 	case DPAA_PKT_TYPE_IPV4:
@@ -204,13 +204,13 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 		break;
 	case DPAA_PKT_TYPE_IPV4_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_IPV4_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_TCP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV4_UDP_CSUM_ERR:
 	case DPAA_PKT_TYPE_IPV6_UDP_CSUM_ERR:
-		m->ol_flags = PKT_RX_RSS_HASH | PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags = RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	case DPAA_PKT_TYPE_NONE:
 		m->packet_type = 0;
@@ -229,7 +229,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
 
 	/* Check if Vlan is present */
 	if (prs & DPAA_PARSE_VLAN_MASK)
-		m->ol_flags |= PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN;
 	/* Packet received without stripping the vlan */
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f491f4d10a..267090c59b 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -114,7 +114,7 @@ dpaa2_dev_rx_parse_new(struct rte_mbuf *m, const struct qbman_fd *fd,
 		m->packet_type = dpaa2_dev_rx_parse_slow(m, annotation);
 	}
 	m->hash.rss = fd->simple.flc_hi;
-	m->ol_flags |= PKT_RX_RSS_HASH;
+	m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (dpaa2_enable_ts[m->port]) {
 		*dpaa2_timestamp_dynfield(m) = annotation->word2;
@@ -141,20 +141,20 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (BIT_ISSET_AT_POS(annotation->word1, DPAA2_ETH_FAS_PTP))
-		mbuf->ol_flags |= PKT_RX_IEEE1588_PTP;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 #endif
 
 	if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_1_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN;
 		pkt_type |= RTE_PTYPE_L2_ETHER_VLAN;
 	} else if (BIT_ISSET_AT_POS(annotation->word3, L2_VLAN_N_PRESENT)) {
 		vlan_tci = rte_pktmbuf_mtod_offset(mbuf, uint16_t *,
 			(VLAN_TCI_OFFSET_1(annotation->word5) >> 16));
 		mbuf->vlan_tci = rte_be_to_cpu_16(*vlan_tci);
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_QINQ;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_QINQ;
 		pkt_type |= RTE_PTYPE_L2_ETHER_QINQ;
 	}
 
@@ -189,9 +189,9 @@ dpaa2_dev_rx_parse_slow(struct rte_mbuf *mbuf,
 	}
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (BIT_ISSET_AT_POS(annotation->word4, L3_IP_1_FIRST_FRAGMENT |
 	    L3_IP_1_MORE_FRAGMENT |
@@ -232,9 +232,9 @@ dpaa2_dev_rx_parse(struct rte_mbuf *mbuf, void *hw_annot_addr)
 			   annotation->word4);
 
 	if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L3CE))
-		mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else if (BIT_ISSET_AT_POS(annotation->word8, DPAA2_ETH_FAS_L4CE))
-		mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if (dpaa2_enable_ts[mbuf->port]) {
 		*dpaa2_timestamp_dynfield(mbuf) = annotation->word2;
@@ -1228,9 +1228,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN) ||
-						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_MBUF_F_TX_VLAN) ||
+						     (eth_data->dev_conf.txmode.offloads
+						      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1271,9 +1271,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
-				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+			if (unlikely(((*bufs)->ol_flags & RTE_MBUF_F_TX_VLAN) ||
+				     (eth_data->dev_conf.txmode.offloads
+				      & DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
@@ -1532,7 +1532,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN)) {
+						& RTE_MBUF_F_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
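
The dpaa2 Tx path above falls back to software rte_vlan_insert()
whenever RTE_MBUF_F_TX_VLAN is set (or the port enables
DEV_TX_OFFLOAD_VLAN_INSERT). A hedged sketch of the application-side
request that drives this branch; the helper is illustrative:

    #include <rte_mbuf.h>

    /* Mark the mbuf for VLAN tag insertion; the PMD applies vlan_tci
     * in hardware or, as above, in software. */
    static void request_vlan(struct rte_mbuf *m, uint16_t tci)
    {
            m->vlan_tci = tci;
            m->ol_flags |= RTE_MBUF_F_TX_VLAN;
    }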
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 82873c91b0..82516bda8b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -50,15 +50,14 @@
 
 #define E1000_RXDCTL_GRAN	0x01000000 /* RXDCTL Granularity */
 
-#define E1000_TX_OFFLOAD_MASK ( \
-		PKT_TX_IPV6 |           \
-		PKT_TX_IPV4 |           \
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN)
+#define E1000_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV6 |           \
+		RTE_MBUF_F_TX_IPV4 |           \
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
 
 /* PCI offset for querying configuration status register */
 #define PCI_CFG_STATUS_REG                 0x06
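
The XOR against RTE_MBUF_F_TX_OFFLOAD_MASK above yields every Tx
offload bit this driver does not support, which the tx_prepare path
tests to reject bad requests. A minimal sketch of that pattern,
illustrative only:

    #include <errno.h>
    #include <rte_mbuf.h>

    /* Reject packets requesting an offload outside the supported set;
     * E1000_TX_OFFLOAD_NOTSUP_MASK is the macro from the hunk above. */
    static int e1000_check_tx_offloads(const struct rte_mbuf *m)
    {
            if (m->ol_flags & E1000_TX_OFFLOAD_NOTSUP_MASK)
                    return -ENOTSUP;
            return 0;
    }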
@@ -236,7 +235,7 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	 * When doing checksum or TCP segmentation with IPv6 headers,
 	 * IPCSE field should be set to 0.
 	 */
-	if (flags & PKT_TX_IP_CKSUM) {
+	if (flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
@@ -249,13 +248,13 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	ctx.upper_setup.tcp_fields.tucss = (uint8_t)ipcse;
 	ctx.upper_setup.tcp_fields.tucse = 0;
 
-	switch (flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_udp_hdr, dgram_cksum));
 		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct rte_tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
@@ -358,8 +357,8 @@ tx_desc_cksum_flags_to_upper(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
 	uint32_t tmp;
 
-	tmp = l4_olinfo[(ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	return tmp;
 }
 
@@ -412,7 +411,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = tx_pkt->ol_flags;
 
 		/* If hardware offload required */
-		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
+		tx_ol_req = (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK));
 		if (tx_ol_req) {
 			hdrlen.f.vlan_tci = tx_pkt->vlan_tci;
 			hdrlen.f.l2_len = tx_pkt->l2_len;
@@ -508,7 +507,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN) {
+		if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
@@ -658,7 +657,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 	return pkt_flags;
 }
@@ -669,9 +668,9 @@ rx_desc_error_to_pkt_flags(uint32_t rx_error)
 	uint64_t pkt_flags = 0;
 
 	if (rx_error & E1000_RXD_ERR_IPE)
-		pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	if (rx_error & E1000_RXD_ERR_TCPE)
-		pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	return pkt_flags;
 }
 
@@ -813,7 +812,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->ol_flags = rxm->ol_flags |
 				rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
@@ -1039,7 +1038,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->ol_flags = first_seg->ol_flags |
 					rx_desc_error_to_pkt_flags(rxd.errors);
 
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index fa2797074f..141c2ba000 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -44,24 +44,23 @@
 #include "e1000_ethdev.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IGB_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IGB_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IGB_TX_IEEE1588_TMST 0
 #endif
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGB_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
+#define IGB_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
 		IGB_TX_IEEE1588_TMST)
 
 #define IGB_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGB_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -226,12 +225,12 @@ struct igb_tx_queue {
 static inline uint64_t
 check_tso_para(uint64_t ol_req, union igb_tx_offload ol_para)
 {
-	if (!(ol_req & PKT_TX_TCP_SEG))
+	if (!(ol_req & RTE_MBUF_F_TX_TCP_SEG))
 		return ol_req;
 	if ((ol_para.tso_segsz > IGB_TSO_MAX_MSS) || (ol_para.l2_len +
 			ol_para.l3_len + ol_para.l4_len > IGB_TSO_MAX_HDRLEN)) {
-		ol_req &= ~PKT_TX_TCP_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req &= ~RTE_MBUF_F_TX_TCP_SEG;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
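
check_tso_para() above demotes an oversized TSO request to a plain TCP
checksum request. For reference, a hedged example of the request an
application would have made; not taken from the patch:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Ask for TCP segmentation offload on an IPv4 flow; the header
     * lengths and tso_segsz feed the IGB_TSO_MAX_MSS and
     * IGB_TSO_MAX_HDRLEN checks above. */
    static void request_tso(struct rte_mbuf *m, uint16_t mss)
    {
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->l4_len = sizeof(struct rte_tcp_hdr);
            m->tso_segsz = mss;
            m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                           RTE_MBUF_F_TX_TCP_SEG;
    }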
@@ -262,13 +261,13 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4 |
 				E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
@@ -281,26 +280,26 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << E1000_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << E1000_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
 				<< E1000_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -359,9 +358,9 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
-	tmp |= l4_olinfo[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
+	tmp |= l4_olinfo[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return tmp;
 }
 
@@ -371,8 +370,8 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
-	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
+	cmdtype |= tso_cmd[(ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
 
@@ -528,11 +527,11 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		cmd_type_len = txq->txd_type |
 			E1000_ADVTXD_DCMD_IFCS | E1000_ADVTXD_DCMD_DEXT;
-		if (tx_ol_req & PKT_TX_TCP_SEG)
+		if (tx_ol_req & RTE_MBUF_F_TX_TCP_SEG)
 			pkt_len -= (tx_pkt->l2_len + tx_pkt->l3_len + tx_pkt->l4_len);
 		olinfo_status = (pkt_len << E1000_ADVTXD_PAYLEN_SHIFT);
 #if defined(RTE_LIBRTE_IEEE1588)
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= E1000_ADVTXD_MAC_TSTAMP;
 #endif
 		if (tx_ol_req) {
@@ -630,7 +629,7 @@ eth_igb_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 
 		/* Check some limitations for TSO in hardware */
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			if ((m->tso_segsz > IGB_TSO_MAX_MSS) ||
 					(m->l2_len + m->l3_len + m->l4_len >
 					IGB_TSO_MAX_HDRLEN)) {
@@ -745,11 +744,11 @@ igb_rxd_pkt_info_to_pkt_type(uint16_t pkt_info)
 static inline uint64_t
 rx_desc_hlen_type_rss_to_pkt_flags(struct igb_rx_queue *rxq, uint32_t hl_tp_rs)
 {
-	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : PKT_RX_RSS_HASH;
+	uint64_t pkt_flags = ((hl_tp_rs & 0x0F) == 0) ?  0 : RTE_MBUF_F_RX_RSS_HASH;
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	static uint32_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -775,11 +774,11 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status)
 
 	/* Check if VLAN present */
 	pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED : 0);
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED : 0);
 
 #if defined(RTE_LIBRTE_IEEE1588)
 	if (rx_status & E1000_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -793,10 +792,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	 */
 
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	return error_to_pkt_flags_map[(rx_status >>
 		E1000_RXD_ERR_CKSUM_BIT) & E1000_RXD_ERR_CKSUM_MSK];
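
The table above encodes all four combinations because the Rx checksum
state is a 2-bit field (UNKNOWN/BAD/GOOD/NONE), not a single flag, so
consumers compare against the field mask. A hedged application-side
sketch:

    #include <stdint.h>
    #include <rte_mbuf_core.h>

    /* Checksum state is multi-valued: test with the field mask, never
     * with a single-bit AND. */
    static inline int rx_ip_csum_good(uint64_t ol_flags)
    {
            return (ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
                   RTE_MBUF_F_RX_IP_CKSUM_GOOD;
    }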
@@ -938,7 +937,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
@@ -1178,7 +1177,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
 
 		/*
-		 * The vlan_tci field is only valid when PKT_RX_VLAN is
+		 * The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 		 * set in the pkt_flags field and must be in CPU byte order.
 		 */
 		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b6287..e1e88096c5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -120,9 +120,9 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 			DEV_TX_OFFLOAD_UDP_CKSUM |\
 			DEV_TX_OFFLOAD_IPV4_CKSUM |\
 			DEV_TX_OFFLOAD_TCP_TSO)
-#define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
-		       PKT_TX_IP_CKSUM |\
-		       PKT_TX_TCP_SEG)
+#define MBUF_OFFLOADS (RTE_MBUF_F_TX_L4_MASK |\
+		       RTE_MBUF_F_TX_IP_CKSUM |\
+		       RTE_MBUF_F_TX_TCP_SEG)
 
 /** Vendor ID used by Amazon devices */
 #define PCI_VENDOR_ID_AMAZON 0x1D0F
@@ -130,15 +130,14 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define PCI_DEVICE_ID_ENA_VF		0xEC20
 #define PCI_DEVICE_ID_ENA_VF_RSERV0	0xEC21
 
-#define	ENA_TX_OFFLOAD_MASK	(\
-	PKT_TX_L4_MASK |         \
-	PKT_TX_IPV6 |            \
-	PKT_TX_IPV4 |            \
-	PKT_TX_IP_CKSUM |        \
-	PKT_TX_TCP_SEG)
+#define	ENA_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_L4_MASK |         \
+	RTE_MBUF_F_TX_IPV6 |            \
+	RTE_MBUF_F_TX_IPV4 |            \
+	RTE_MBUF_F_TX_IP_CKSUM |        \
+	RTE_MBUF_F_TX_TCP_SEG)
 
 #define	ENA_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ENA_TX_OFFLOAD_MASK)
 
 static const struct rte_pci_id pci_id_ena_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_AMAZON, PCI_DEVICE_ID_ENA_VF) },
@@ -274,24 +273,24 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV4) {
 		packet_type |= RTE_PTYPE_L3_IPV4;
 		if (unlikely(ena_rx_ctx->l3_csum_err))
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (ena_rx_ctx->l3_proto == ENA_ETH_IO_L3_PROTO_IPV6) {
 		packet_type |= RTE_PTYPE_L3_IPV6;
 	}
 
 	if (!ena_rx_ctx->l4_csum_checked || ena_rx_ctx->frag)
-		ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+		ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	else
 		if (unlikely(ena_rx_ctx->l4_csum_err))
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (fill_hash &&
 	    likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = ena_rx_ctx->hash;
 	}
 
@@ -309,7 +308,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	if ((mbuf->ol_flags & MBUF_OFFLOADS) &&
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
-		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
@@ -317,11 +316,11 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L3 checksum is needed */
-		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
+		if ((mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
-		if (mbuf->ol_flags & PKT_TX_IPV6) {
+		if (mbuf->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV6;
 		} else {
 			ena_tx_ctx->l3_proto = ENA_ETH_IO_L3_PROTO_IPV4;
@@ -334,12 +333,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		}
 
 		/* check if L4 checksum is needed */
-		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
+		if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) &&
 		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
-		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_UDP_CKSUM) &&
+		} else if (((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_UDP_CKSUM) &&
 				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
@@ -2149,7 +2148,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash);
 
 		if (unlikely(mbuf->ol_flags &
-				(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
+				(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD))) {
 			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
 		}
@@ -2191,7 +2190,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (!(ol_flags & PKT_TX_IPV4))
+		if (!(ol_flags & RTE_MBUF_F_TX_IPV4))
 			continue;
 
 		/* If no L2 header length was specified, assume it is
@@ -2215,8 +2214,8 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((ol_flags & ENA_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -2235,7 +2234,7 @@ eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 
 		ret = rte_net_intel_cksum_flags_prepare(m,
-			ol_flags & ~PKT_TX_TCP_SEG);
+			ol_flags & ~RTE_MBUF_F_TX_TCP_SEG);
 		if (ret != 0) {
 			rte_errno = -ret;
 			return i;
diff --git a/drivers/net/enetc/enetc_rxtx.c b/drivers/net/enetc/enetc_rxtx.c
index 412322523d..ea64c9f682 100644
--- a/drivers/net/enetc/enetc_rxtx.c
+++ b/drivers/net/enetc/enetc_rxtx.c
@@ -174,80 +174,80 @@ enetc_refill_rx_ring(struct enetc_bdr *rx_ring, const int buff_cnt)
 static inline void enetc_slow_parsing(struct rte_mbuf *m,
 				     uint64_t parse_results)
 {
-	m->ol_flags &= ~(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+	m->ol_flags &= ~(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	switch (parse_results) {
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6;
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_TCP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_TCP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_UDP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_UDP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_SCTP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_SCTP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV4_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV4 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	case ENETC_PARSE_ERROR | ENETC_PKT_TYPE_IPV6_ICMP:
 		m->packet_type = RTE_PTYPE_L2_ETHER |
 				 RTE_PTYPE_L3_IPV6 |
 				 RTE_PTYPE_L4_ICMP;
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD |
-			       PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		return;
 	/* More switch cases can be added */
 	default:
 		m->packet_type = RTE_PTYPE_UNKNOWN;
-		m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN |
-			       PKT_RX_L4_CKSUM_UNKNOWN;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN |
+			       RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 	}
 }
 
@@ -256,7 +256,7 @@ static inline void __rte_hot
 enetc_dev_rx_parse(struct rte_mbuf *m, uint16_t parse_results)
 {
 	ENETC_PMD_DP_DEBUG("parse summary = 0x%x   ", parse_results);
-	m->ol_flags |= PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	switch (parse_results) {
 	case ENETC_PKT_TYPE_ETHER:
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b312e216ef 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -250,7 +250,7 @@ void enic_init_vnic_resources(struct enic *enic)
 			error_interrupt_offset);
 		/* Compute unsupported ol flags for enic_prep_pkts() */
 		enic->wq[index].tx_offload_notsup_mask =
-			PKT_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
+			RTE_MBUF_F_TX_OFFLOAD_MASK ^ enic->tx_offload_mask;
 
 		cq_idx = enic_cq_wq(enic, index);
 		vnic_cq_init(&enic->cq[cq_idx],
@@ -1755,10 +1755,10 @@ enic_enable_overlay_offload(struct enic *enic)
 		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
 		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK;
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 	enic->overlay_offload = true;
 
 	if (enic->vxlan && enic->geneve)
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d0..e85f9f23fb 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -216,12 +216,12 @@ int enic_get_vnic_config(struct enic *enic)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_IP_CKSUM |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG;
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG;
 
 	return 0;
 }
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 3899907d6d..c44715bfd0 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -424,7 +424,7 @@ uint16_t enic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i != nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (unlikely(m->pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
 				rte_errno = EINVAL;
 				return i;
@@ -489,7 +489,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	wq_desc_avail = vnic_wq_desc_avail(wq);
 	head_idx = wq->head_idx;
 	desc_count = wq->ring.desc_count;
-	ol_flags_mask = PKT_TX_VLAN | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+	ol_flags_mask = RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK;
 	tx_oversized = &enic->soft_stats.tx_oversized;
 
 	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
@@ -500,7 +500,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		data_len = tx_pkt->data_len;
 		ol_flags = tx_pkt->ol_flags;
 		nb_segs = tx_pkt->nb_segs;
-		tso = ol_flags & PKT_TX_TCP_SEG;
+		tso = ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 
 		/* drop packet if it's too big to send */
 		if (unlikely(!tso && pkt_len > ENIC_TX_MAX_PKT_SIZE)) {
@@ -517,7 +517,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		mss = 0;
 		vlan_id = tx_pkt->vlan_tci;
-		vlan_tag_insert = !!(ol_flags & PKT_TX_VLAN);
+		vlan_tag_insert = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		bus_addr = (dma_addr_t)
 			   (tx_pkt->buf_iova + tx_pkt->data_off);
 
@@ -543,20 +543,20 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
 			mss = tx_pkt->tso_segsz;
 			/* For tunnel, need the size of outer+inner headers */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				header_len += tx_pkt->outer_l2_len +
 					tx_pkt->outer_l3_len;
 			}
 		}
 
 		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
-			if (ol_flags & PKT_TX_IP_CKSUM)
+			if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				mss |= ENIC_CALC_IP_CKSUM;
 
 			/* NIC uses just 1 bit for UDP and TCP */
-			switch (ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
-			case PKT_TX_UDP_CKSUM:
+			switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				mss |= ENIC_CALC_TCP_UDP_CKSUM;
 				break;
 			}
@@ -634,7 +634,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		desc->header_length_flags &=
 			((1 << WQ_ENET_FLAGS_EOP_SHIFT) |
 			 (1 << WQ_ENET_FLAGS_CQ_ENTRY_SHIFT));
-		if (p->ol_flags & PKT_TX_VLAN) {
+		if (p->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			desc->header_length_flags |=
 				1 << WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT;
 		}
@@ -643,9 +643,9 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts,
 		 * is 0, so no need to set offload_mode.
 		 */
 		mss = 0;
-		if (p->ol_flags & PKT_TX_IP_CKSUM)
+		if (p->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			mss |= ENIC_CALC_IP_CKSUM << WQ_ENET_MSS_SHIFT;
-		if (p->ol_flags & PKT_TX_L4_MASK)
+		if (p->ol_flags & RTE_MBUF_F_TX_L4_MASK)
 			mss |= ENIC_CALC_TCP_UDP_CKSUM << WQ_ENET_MSS_SHIFT;
 		desc->mss_loopback = mss;
 
diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h
index d8668d1898..9d6d3476b0 100644
--- a/drivers/net/enic/enic_rxtx_common.h
+++ b/drivers/net/enic/enic_rxtx_common.h
@@ -209,11 +209,11 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 
 	/* VLAN STRIPPED flag. The L2 packet type updated here also */
 	if (bwflags & CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
 	} else {
 		if (vlan_tci != 0) {
-			pkt_flags |= PKT_RX_VLAN;
+			pkt_flags |= RTE_MBUF_F_RX_VLAN;
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
 		} else {
 			mbuf->packet_type |= RTE_PTYPE_L2_ETHER;
@@ -227,16 +227,16 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 		clsf_cqd = (struct cq_enet_rq_clsf_desc *)cqd;
 		filter_id = clsf_cqd->filter_id;
 		if (filter_id) {
-			pkt_flags |= PKT_RX_FDIR;
+			pkt_flags |= RTE_MBUF_F_RX_FDIR;
 			if (filter_id != ENIC_MAGIC_FILTER_ID) {
 				/* filter_id = mark id + 1, so subtract 1 */
 				mbuf->hash.fdir.hi = filter_id - 1;
-				pkt_flags |= PKT_RX_FDIR_ID;
+				pkt_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			}
 		}
 	} else if (enic_cq_rx_desc_rss_type(cqrd)) {
 		/* RSS flag */
-		pkt_flags |= PKT_RX_RSS_HASH;
+		pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mbuf->hash.rss = enic_cq_rx_desc_rss_hash(cqrd);
 	}
 
@@ -254,17 +254,17 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
 			 */
 			if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
 				if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			}
 
 			if (l4_flags == RTE_PTYPE_L4_UDP ||
 			    l4_flags == RTE_PTYPE_L4_TCP) {
 				if (enic_cq_rx_desc_tcp_udp_csum_ok(cqrd))
-					pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 				else
-					pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+					pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			}
 		}
 	}
diff --git a/drivers/net/enic/enic_rxtx_vec_avx2.c b/drivers/net/enic/enic_rxtx_vec_avx2.c
index 1848f52717..600efff270 100644
--- a/drivers/net/enic/enic_rxtx_vec_avx2.c
+++ b/drivers/net/enic/enic_rxtx_vec_avx2.c
@@ -167,21 +167,21 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10,
 			0x80, 0x80, 11, 10);
-	/* PKT_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
+	/* RTE_MBUF_F_RX_RSS_HASH is 1<<1 so fits in 8-bit integer */
 	const __m256i rss_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0, /* rss_types = 0 */
 			/* first 128 bits */
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
 			0 /* rss_types = 0 */);
 	/*
 	 * VLAN offload flags.
@@ -191,8 +191,8 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i vlan_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN);
 	/* Use the same shuffle index as vlan_shuffle */
 	const __m256i vlan_ptype_shuffle =
 		_mm256_set_epi32(0, 0, 0, 0,
@@ -211,39 +211,39 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	const __m256i csum_shuffle =
 		_mm256_set_epi8(/* second 128 bits */
 			/* 1111 ip4+ip4_ok+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 1110 ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
-			(PKT_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1101 ip4+ip4_ok */
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), /* 1100 ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 1011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 1010 l4+!l4_ok */
 			0, /* 1001 */
 			0, /* 1000 */
 			/* 0111 !ip4_ok+ip4+l4+l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			/* 0110 !ip4_ok+ip4+l4+!l4_ok */
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
-			(PKT_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
-			(PKT_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
-			(PKT_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0101 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),  /* 0100 !ip4_ok+ip4 */
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), /* 0011 l4+l4_ok */
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),  /* 0010 l4+!l4_ok */
 			0, /* 0001 */
 			0, /* 0000 */
 			/* first 128 bits */
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_IP_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0,
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> 1),
-			((PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_IP_CKSUM_BAD >> 1),
-			(PKT_RX_L4_CKSUM_GOOD >> 1),
-			(PKT_RX_L4_CKSUM_BAD >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
+			((RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD >> 1),
 			0, 0);
 	/*
 	 * Non-fragment PTYPEs.
@@ -471,7 +471,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			break;
 
 		/*
-		 * Compute PKT_RX_RSS_HASH.
+		 * Compute RTE_MBUF_F_RX_RSS_HASH.
 		 * Use 2 shifts and 1 shuffle for 8 desc: 0.375 inst/desc
 		 * RSS types in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
@@ -479,7 +479,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		__m256i rss_types =
 			_mm256_srli_epi32(_mm256_slli_epi32(flags0_7, 10), 28);
 		/*
-		 * RSS flags (PKT_RX_RSS_HASH) are in
+		 * RSS flags (RTE_MBUF_F_RX_RSS_HASH) are in
 		 * byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
 		 */
@@ -557,7 +557,7 @@ enic_noscatter_vec_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		vlan0_7 = _mm256_sub_epi32(zero4, vlan0_7);
 
 		/*
-		 * Compute PKT_RX_VLAN and PKT_RX_VLAN_STRIPPED.
+		 * Compute RTE_MBUF_F_RX_VLAN and RTE_MBUF_F_RX_VLAN_STRIPPED.
 		 * Use 3 shifts, 1 or,  1 shuffle for 8 desc: 0.625 inst/desc
 		 * VLAN offload flags in byte 0, 4, 8, 12, 16, 20, 24, 28
 		 * Everything else is zero.
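
A note on the ">> 1" entries in the csum_shuffle table above:
RTE_MBUF_F_RX_L4_CKSUM_GOOD is 1ULL << 8 at the time of writing, so the
combined flag words do not fit in the 8-bit lanes a byte shuffle consumes;
the table therefore stores them pre-shifted and the shift is undone after
the lookup. Scalar sketch of the same trick, assuming the current flag
values from rte_mbuf_core.h:

#include <stdint.h>
#include <rte_mbuf_core.h>

/* Pack a checksum-flag combination into one shuffle-table byte. */
static uint8_t
pack_csum_flags(uint64_t flags)
{
	/* lossless here: all Rx csum flags sit between bit 3 and bit 8 */
	return (uint8_t)(flags >> 1);
}

/* Undo the shift after the vector table lookup. */
static uint64_t
unpack_csum_flags(uint8_t packed)
{
	return (uint64_t)packed << 1;
}
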
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 496e72a003..b232d09104 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -37,16 +37,15 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
 }
 #endif
 
-#define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |            \
-		PKT_TX_IPV4 |            \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_TCP_SEG)
+#define FM10K_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |            \
+		RTE_MBUF_F_TX_IPV4 |            \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define FM10K_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ FM10K_TX_OFFLOAD_MASK)
 
 /* @note: When this function is changed, make corresponding change to
  * fm10k_dev_supported_ptypes_get()
@@ -78,21 +77,21 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
 						>> FM10K_RXD_PKTTYPE_SHIFT];
 
 	if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)) ==
 		(FM10K_RXD_STATUS_IPCS | FM10K_RXD_STATUS_IPE)))
-		m->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely((d->d.staterr &
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)) ==
 		(FM10K_RXD_STATUS_L4CS | FM10K_RXD_STATUS_L4E)))
-		m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 }
 
 uint16_t
@@ -131,10 +130,10 @@ fm10k_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, the RTE_MBUF_F_RX_VLAN flag is always set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mbuf->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -292,10 +291,10 @@ fm10k_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * Packets in fm10k device always carry at least one VLAN tag.
 		 * For those packets coming in without VLAN tag,
 		 * the port default VLAN tag will be used.
-		 * So, always PKT_RX_VLAN flag is set and vlan_tci
+		 * So, the RTE_MBUF_F_RX_VLAN flag is always set and vlan_tci
 		 * is valid for each RX packet's mbuf.
 		 */
-		first_seg->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		first_seg->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		first_seg->vlan_tci = desc.w.vlan;
 		/**
 		 * mbuf->vlan_tci_outer is an idle field in fm10k driver,
@@ -605,11 +604,11 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	/* set checksum flags on first descriptor of packet. SCTP checksum
 	 * offload is not supported, but we do not explicitly check for this
 	 * case in favor of greatly simplified processing. */
-	if (mb->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
+	if (mb->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN)
+	if (mb->ol_flags & RTE_MBUF_F_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
@@ -620,9 +619,9 @@ static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 	q->hw_ring[q->next_free].buflen =
 			rte_cpu_to_le_16(rte_pktmbuf_data_len(mb));
 
-	if (mb->ol_flags & PKT_TX_TCP_SEG) {
+	if (mb->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;
-		hdrlen += (mb->ol_flags & PKT_TX_TUNNEL_MASK) ?
+		hdrlen += (mb->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  mb->outer_l2_len + mb->outer_l3_len : 0;
 		if (q->hw_ring[q->next_free].flags & FM10K_TXD_FLAG_FTAG)
 			hdrlen += sizeof(struct fm10k_ftag);
@@ -699,7 +698,7 @@ fm10k_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 
-		if ((m->ol_flags & PKT_TX_TCP_SEG) &&
+		if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				(m->tso_segsz < FM10K_TSO_MINMSS)) {
 			rte_errno = EINVAL;
 			return i;
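
The FM10K_TX_OFFLOAD_NOTSUP_MASK idiom above (XOR of the global
RTE_MBUF_F_TX_OFFLOAD_MASK with the PMD's supported set) is how the
prepare stage rejects offload requests the hardware cannot honour. A
hedged sketch of the check; the helper name is hypothetical:

#include <errno.h>
#include <rte_mbuf.h>

/* Return 0 when all requested Tx offloads are supported. */
static int
tx_offloads_supported(const struct rte_mbuf *m, uint64_t notsup_mask)
{
	if (m->ol_flags & notsup_mask)
		return -ENOTSUP;	/* a real prep path also sets rte_errno */
	return 0;
}
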
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2d..7ecba9fef2 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -38,7 +38,7 @@ fm10k_reset_tx_queue(struct fm10k_tx_queue *txq);
 #define RXEFLAG_SHIFT     (13)
 /* IPE/L4E flag shift */
 #define L3L4EFLAG_SHIFT     (14)
-/* shift PKT_RX_L4_CKSUM_GOOD into one byte by 1 bit */
+/* shift RTE_MBUF_F_RX_L4_CKSUM_GOOD into one byte by 1 bit */
 #define CKSUM_SHIFT     (1)
 
 static inline void
@@ -52,10 +52,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	const __m128i pkttype_msk = _mm_set_epi16(
 			0x0000, 0x0000, 0x0000, 0x0000,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	/* mask everything except rss type */
 	const __m128i rsstype_msk = _mm_set_epi16(
@@ -75,10 +75,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i l3l4cksum_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> CKSUM_SHIFT,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> CKSUM_SHIFT);
 
 	const __m128i rxe_flag = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
@@ -87,9 +87,10 @@ fm10k_desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
 
 	/* map rss type to rss hash flag */
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* Calculate RSS_hash and Vlan fields */
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4c..311b22ccd1 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -802,7 +802,7 @@ static inline uint64_t hinic_rx_rss_hash(uint32_t offload_type,
 	rss_type = HINIC_GET_RSS_TYPES(offload_type);
 	if (likely(rss_type != 0)) {
 		*rss_hash = cqe_hass_val;
-		return PKT_RX_RSS_HASH;
+		return RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	return 0;
@@ -815,33 +815,33 @@ static inline uint64_t hinic_rx_csum(uint32_t status, struct hinic_rxq *rxq)
 	struct hinic_nic_dev *nic_dev = rxq->nic_dev;
 
 	if (unlikely(!(nic_dev->rx_csum_en & HINIC_RX_CSUM_OFFLOAD_EN)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	/* most case checksum is ok */
 	checksum_err = HINIC_GET_RX_CSUM_ERR(status);
 	if (likely(checksum_err == 0))
-		return (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		return (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 
 	/* If BYPASS bit set, all other status indications should be ignored */
 	if (unlikely(HINIC_CSUM_ERR_BYPASSED(checksum_err)))
-		return PKT_RX_IP_CKSUM_UNKNOWN;
+		return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	flags = 0;
 
 	/* IP checksum error */
 	if (HINIC_CSUM_ERR_IP(checksum_err))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* L4 checksum error */
 	if (HINIC_CSUM_ERR_L4(checksum_err))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(HINIC_CSUM_ERR_OTHER(checksum_err)))
-		flags = PKT_RX_L4_CKSUM_NONE;
+		flags = RTE_MBUF_F_RX_L4_CKSUM_NONE;
 
 	rxq->rxq_stats.errors++;
 
@@ -861,7 +861,7 @@ static inline uint64_t hinic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
 
 	*vlan_tci = vlan_tag;
 
-	return PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	return RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 }
 
 static inline u32 hinic_rx_alloc_mbuf_bulk(struct hinic_rxq *rxq,
@@ -1061,7 +1061,7 @@ u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
 		/* lro offload */
 		lro_num = HINIC_GET_RX_NUM_LRO(cqe.status);
 		if (unlikely(lro_num != 0)) {
-			rxm->ol_flags |= PKT_RX_LRO;
+			rxm->ol_flags |= RTE_MBUF_F_RX_LRO;
 			rxm->tso_segsz = pkt_len / lro_num;
 		}
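
A subtlety the hinic Rx path above depends on: the Rx L4 checksum state
(UNKNOWN/BAD/GOOD/NONE) is a two-bit field, not a set of independent bits,
so consumers must mask before comparing. A sketch using only the public
flag names:

#include <stdint.h>
#include <rte_mbuf_core.h>

/* True only for an explicit "checksum verified good" result. */
static int
rx_l4_csum_good(uint64_t ol_flags)
{
	return (ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
	       RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
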
 
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index e14937139d..2688817f37 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN)) {
+	if (unlikely(ol_flags & RTE_MBUF_F_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
@@ -602,7 +602,7 @@ hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	if (unlikely(!(ol_flags & HINIC_TX_CKSUM_OFFLOAD_MASK)))
 		return;
 
-	if ((ol_flags & PKT_TX_TCP_SEG))
+	if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		/* set tso info for task and qsf */
 		hinic_set_tso_info(task, queue_info, mbuf, tx_off_info);
 	else /* just support l4 checksum offload */
@@ -718,7 +718,7 @@ hinic_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len =
@@ -738,7 +738,7 @@ hinic_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		psd_hdr.len = 0;
 	else
 		psd_hdr.len = ipv6_hdr->payload_len;
@@ -754,10 +754,10 @@ static inline void hinic_get_outer_cs_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
 					   m->l2_len + m->l3_len + m->l4_len;
 }
@@ -767,10 +767,10 @@ static inline void hinic_get_pld_offset(struct rte_mbuf *m,
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if (((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) ||
-	    ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM))
+	if (((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) ||
+	    ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_SCTP_CKSUM))
 		off_info->payload_offset = m->l2_len + m->l3_len;
-	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+	else if ((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) || (ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		off_info->payload_offset = m->l2_len + m->l3_len +
 					   m->l4_len;
 }
@@ -845,11 +845,11 @@ static inline uint8_t hinic_analyze_l3_type(struct rte_mbuf *mbuf)
 	uint8_t l3_type;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4)
-		l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
+		l3_type = (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ?
 			  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
 			  IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = IPV6_PKT;
 	else
 		l3_type = UNKNOWN_L3TYPE;
@@ -866,11 +866,11 @@ static inline void hinic_calculate_tcp_checksum(struct rte_mbuf *mbuf,
 	struct rte_tcp_hdr *tcp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
@@ -898,11 +898,11 @@ static inline void hinic_calculate_udp_checksum(struct rte_mbuf *mbuf,
 	struct rte_udp_hdr *udp_hdr;
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
 						   inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 
 		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
@@ -938,21 +938,21 @@ static inline void hinic_calculate_checksum(struct rte_mbuf *mbuf,
 {
 	uint64_t ol_flags = mbuf->ol_flags;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hinic_calculate_udp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hinic_calculate_tcp_checksum(mbuf, off_info, inner_l3_offset);
 		break;
 
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		hinic_calculate_sctp_checksum(off_info);
 		break;
 
 	default:
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			hinic_calculate_tcp_checksum(mbuf, off_info,
 						     inner_l3_offset);
 		break;
@@ -970,8 +970,8 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return 0;
 
 	/* Support only vxlan offload */
-	if (unlikely((ol_flags & PKT_TX_TUNNEL_MASK) &&
-	    !(ol_flags & PKT_TX_TUNNEL_VXLAN)))
+	if (unlikely((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		     !(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)))
 		return -ENOTSUP;
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
@@ -979,7 +979,7 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		return -EINVAL;
 #endif
 
-	if (ol_flags & PKT_TX_TUNNEL_VXLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) {
 		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
 
 		/* inner_l4_tcp_udp csum should be set to calculate outer
@@ -987,9 +987,9 @@ static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
 		 */
 		off_info->inner_l4_tcp_udp = 1;
 
-		if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		    (ol_flags & PKT_TX_OUTER_IPV6) ||
-		    (ol_flags & PKT_TX_TCP_SEG)) {
+		if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		    (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) ||
+		    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			inner_l3_offset = m->l2_len + m->outer_l2_len +
 					  m->outer_l3_len;
 			off_info->outer_l2_len = m->outer_l2_len;
@@ -1057,7 +1057,7 @@ static inline bool hinic_get_sge_txoff_info(struct rte_mbuf *mbuf_pkt,
 	sqe_info->cpy_mbuf_cnt = 0;
 
 	/* non tso mbuf */
-	if (likely(!(mbuf_pkt->ol_flags & PKT_TX_TCP_SEG))) {
+	if (likely(!(mbuf_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
 		if (unlikely(mbuf_pkt->pkt_len > MAX_SINGLE_SGE_SIZE)) {
 			/* non-TSO packet len must be less than 64KB */
 			return false;
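
The Tx counterpart of the Rx rule noted earlier also shows up throughout
this file: RTE_MBUF_F_TX_L4_MASK covers a two-bit field (none/TCP/SCTP/UDP
checksum), which is why the code compares for equality after masking
instead of testing single bits. Equivalent sketch:

#include <stdint.h>
#include <rte_mbuf_core.h>

static int
tx_l4_is_udp_cksum(uint64_t ol_flags)
{
	/* RTE_MBUF_F_TX_UDP_CKSUM is a field value, not a lone bit */
	return (ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
	       RTE_MBUF_F_TX_UDP_CKSUM;
}
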
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
index d98abad8da..a3ec6299fb 100644
--- a/drivers/net/hinic/hinic_pmd_tx.h
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -13,13 +13,12 @@
 #define HINIC_GET_WQ_TAIL(txq)		\
 		((txq)->wq->queue_buf_vaddr + (txq)->wq->wq_buf_size)
 
-#define HINIC_TX_CKSUM_OFFLOAD_MASK (	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_TCP_CKSUM |	\
-		PKT_TX_UDP_CKSUM |      \
-		PKT_TX_SCTP_CKSUM |	\
-		PKT_TX_OUTER_IP_CKSUM |	\
-		PKT_TX_TCP_SEG)
+#define HINIC_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_CKSUM |	\
+		RTE_MBUF_F_TX_UDP_CKSUM |      \
+		RTE_MBUF_F_TX_SCTP_CKSUM |	\
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |	\
+		RTE_MBUF_F_TX_TCP_SEG)
 
 enum sq_wqe_type {
 	SQ_NORMAL_WQE = 0,
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 243a4046ae..375952ba5a 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -622,7 +622,7 @@ struct hns3_hw {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, the PMD needs to
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index feac7eb218..49c4bbeff2 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -2341,11 +2341,11 @@ hns3_rxd_to_vlan_tci(struct hns3_rx_queue *rxq, struct rte_mbuf *mb,
 		mb->vlan_tci = 0;
 		return;
 	case HNS3_INNER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.vlan_tag);
 		return;
 	case HNS3_OUTER_STRP_VLAN_VLD:
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = rte_le_to_cpu_16(rxd->rx.ot_vlan_tag);
 		return;
 	default:
@@ -2395,7 +2395,7 @@ hns3_rx_ptp_timestamp_handle(struct hns3_rx_queue *rxq, struct rte_mbuf *mbuf,
 	struct hns3_pf *pf = HNS3_DEV_PRIVATE_TO_PF(rxq->hns);
 	uint64_t timestamp = rte_le_to_cpu_64(rxd->timestamp);
 
-	mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP | RTE_MBUF_F_RX_IEEE1588_TMST;
 	if (hns3_timestamp_rx_dynflag > 0) {
 		*RTE_MBUF_DYNFIELD(mbuf, hns3_timestamp_dynfield_offset,
 			rte_mbuf_timestamp_t *) = timestamp;
@@ -2481,11 +2481,11 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->data_len = rxm->pkt_len;
 		rxm->port = rxq->port_id;
 		rxm->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		rxm->ol_flags |= PKT_RX_RSS_HASH;
+		rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			rxm->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			rxm->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			rxm->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 		rxm->nb_segs = 1;
 		rxm->next = NULL;
@@ -2500,7 +2500,7 @@ hns3_recv_pkts_simple(void *rx_queue,
 		rxm->packet_type = hns3_rx_calc_ptype(rxq, l234_info, ol_info);
 
 		if (rxm->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, rxm, l234_info, &rxd);
 
@@ -2699,17 +2699,17 @@ hns3_recv_scattered_pkts(void *rx_queue,
 
 		first_seg->port = rxq->port_id;
 		first_seg->hash.rss = rte_le_to_cpu_32(rxd.rx.rss_hash);
-		first_seg->ol_flags = PKT_RX_RSS_HASH;
+		first_seg->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 		if (unlikely(bd_base_info & BIT(HNS3_RXD_LUM_B))) {
 			first_seg->hash.fdir.hi =
 				rte_le_to_cpu_16(rxd.rx.fd_id);
-			first_seg->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		}
 
 		gro_size = hns3_get_field(bd_base_info, HNS3_RXD_GRO_SIZE_M,
 					  HNS3_RXD_GRO_SIZE_S);
 		if (gro_size != 0) {
-			first_seg->ol_flags |= PKT_RX_LRO;
+			first_seg->ol_flags |= RTE_MBUF_F_RX_LRO;
 			first_seg->tso_segsz = gro_size;
 		}
 
@@ -2724,7 +2724,7 @@ hns3_recv_scattered_pkts(void *rx_queue,
 						l234_info, ol_info);
 
 		if (first_seg->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC)
-			rxm->ol_flags |= PKT_RX_IEEE1588_PTP;
+			rxm->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 
 		hns3_rxd_to_vlan_tci(rxq, first_seg, l234_info, &rxd);
 
@@ -3151,7 +3151,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 static inline bool
 hns3_pkt_is_tso(struct rte_mbuf *m)
 {
-	return (m->tso_segsz != 0 && m->ol_flags & PKT_TX_TCP_SEG);
+	return (m->tso_segsz != 0 && m->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 }
 
 static void
@@ -3184,7 +3184,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	uint32_t paylen;
 
 	hdr_len = rxm->l2_len + rxm->l3_len + rxm->l4_len;
-	hdr_len += (ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			   rxm->outer_l2_len + rxm->outer_l3_len : 0;
 	paylen = rxm->pkt_len - hdr_len;
 	desc->tx.paylen_fd_dop_ol4cs |= rte_cpu_to_le_32(paylen);
@@ -3202,11 +3202,11 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
-				PKT_TX_QINQ)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (RTE_MBUF_F_TX_VLAN |
+				                  RTE_MBUF_F_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ)
+		if (ol_flags & RTE_MBUF_F_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3214,14 +3214,14 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ ||
-	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & RTE_MBUF_F_TX_QINQ ||
+	    ((ol_flags & RTE_MBUF_F_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_IEEE1588_TMST)
+	if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 		desc->tx.tp_fe_sc_vld_ra_ri |=
 				rte_cpu_to_le_16(BIT(HNS3_TXD_TSYN_B));
 }
@@ -3343,14 +3343,14 @@ hns3_parse_outer_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 					HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_CSUM);
 		else
 			tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M,
 				HNS3_TXD_OL3T_S, HNS3_OL3T_IPV4_NO_CSUM);
-	} else if (ol_flags & PKT_TX_OUTER_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) {
 		tmp |= hns3_gen_field_val(HNS3_TXD_OL3T_M, HNS3_TXD_OL3T_S,
 					HNS3_OL3T_IPV6);
 	}
@@ -3370,10 +3370,10 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 	uint64_t ol_flags = m->ol_flags;
 	uint16_t inner_l2_len;
 
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN_GPE:
-	case PKT_TX_TUNNEL_GENEVE:
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* MAC in UDP tunnelling packet, include VxLAN and GENEVE */
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 				HNS3_TXD_TUNTYPE_S, HNS3_TUN_MAC_IN_UDP);
@@ -3392,7 +3392,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
 
 		inner_l2_len = m->l2_len - RTE_ETHER_VXLAN_HLEN;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
 					HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
 		/*
@@ -3441,7 +3441,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 	 * calculations, the length of the L2 header, including the outer and
 	 * inner, will be filled during the parsing of tunnel packets.
 	 */
-	if (!(ol_flags & PKT_TX_TUNNEL_MASK)) {
+	if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 		/*
 		 * For non tunnel type the tunnel type id is 0, so no need to
 		 * assign a value to it. Only the inner(normal) L2 header length
@@ -3457,7 +3457,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 		 * calculate the header length.
 		 */
 		if (unlikely(!(ol_flags &
-			(PKT_TX_OUTER_IP_CKSUM | PKT_TX_OUTER_UDP_CKSUM)) &&
+			(RTE_MBUF_F_TX_OUTER_IP_CKSUM | RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
 					m->outer_l2_len == 0)) {
 			struct rte_net_hdr_lens hdr_len;
 			(void)rte_net_get_ptype(m, &hdr_len,
@@ -3474,7 +3474,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
 
 	desc->tx.ol_type_vlan_len_msec = rte_cpu_to_le_32(tmp_outer);
 	desc->tx.type_cs_vlan_tso_len = rte_cpu_to_le_32(tmp_inner);
-	tmp_ol4cs = ol_flags & PKT_TX_OUTER_UDP_CKSUM ?
+	tmp_ol4cs = ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM ?
 			BIT(HNS3_TXD_OL4CS_B) : 0;
 	desc->tx.paylen_fd_dop_ol4cs = rte_cpu_to_le_32(tmp_ol4cs);
 
@@ -3489,9 +3489,9 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint32_t tmp;
 
 	tmp = *type_cs_vlan_tso_len;
-	if (ol_flags & PKT_TX_IPV4)
+	if (ol_flags & RTE_MBUF_F_TX_IPV4)
 		l3_type = HNS3_L3T_IPV4;
-	else if (ol_flags & PKT_TX_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_IPV6)
 		l3_type = HNS3_L3T_IPV6;
 	else
 		l3_type = HNS3_L3T_NONE;
@@ -3503,7 +3503,7 @@ hns3_parse_l3_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	tmp |= hns3_gen_field_val(HNS3_TXD_L3T_M, HNS3_TXD_L3T_S, l3_type);
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= BIT(HNS3_TXD_L3CS_B);
 	*type_cs_vlan_tso_len = tmp;
 }
@@ -3514,20 +3514,20 @@ hns3_parse_l4_cksum_params(struct rte_mbuf *m, uint32_t *type_cs_vlan_tso_len)
 	uint64_t ol_flags = m->ol_flags;
 	uint32_t tmp;
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG)) {
-	case PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG:
-	case PKT_TX_TCP_CKSUM:
-	case PKT_TX_TCP_SEG:
+	switch (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_SEG:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		tmp = *type_cs_vlan_tso_len;
 		tmp |= hns3_gen_field_val(HNS3_TXD_L4T_M, HNS3_TXD_L4T_S,
 					HNS3_L4T_SCTP);
@@ -3584,7 +3584,7 @@ hns3_pkt_need_linearized(struct rte_mbuf *tx_pkts, uint32_t bd_num,
 
 	/* ensure the first 8 frags is greater than mss + header */
 	hdr_len = tx_pkts->l2_len + tx_pkts->l3_len + tx_pkts->l4_len;
-	hdr_len += (tx_pkts->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (tx_pkts->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_pkts->outer_l2_len + tx_pkts->outer_l3_len : 0;
 	if (tot_len + m_last->data_len < tx_pkts->tso_segsz + hdr_len)
 		return true;
@@ -3614,15 +3614,15 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv4_hdr *ipv4_hdr;
 	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		ipv4_hdr->hdr_checksum = 0;
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is supported, HW can calculate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3641,13 +3641,13 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
 	struct rte_ipv6_hdr *ipv6_hdr;
 	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 					   m->outer_l2_len);
-	if (ol_flags & PKT_TX_OUTER_UDP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
 		struct rte_udp_hdr *udp_hdr;
 		/*
 		 * If OUTER_UDP_CKSUM is supported, HW can calculate the pseudo
 		 * header for TSO packets
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			return true;
 		udp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
 				m->outer_l2_len + m->outer_l3_len);
@@ -3666,10 +3666,10 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	uint32_t paylen, hdr_len, l4_proto;
 	struct rte_udp_hdr *udp_hdr;
 
-	if (!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 		return;
 
-	if (ol_flags & PKT_TX_OUTER_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) {
 		if (hns3_outer_ipv4_cksum_prepared(m, ol_flags, &l4_proto))
 			return;
 	} else {
@@ -3678,7 +3678,7 @@ hns3_outer_header_cksum_prepare(struct rte_mbuf *m)
 	}
 
 	/* driver should ensure the outer udp cksum is 0 for TUNNEL TSO */
-	if (l4_proto == IPPROTO_UDP && (ol_flags & PKT_TX_TCP_SEG)) {
+	if (l4_proto == IPPROTO_UDP && (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		hdr_len = m->l2_len + m->l3_len + m->l4_len;
 		hdr_len += m->outer_l2_len + m->outer_l3_len;
 		paylen = m->pkt_len - hdr_len;
@@ -3704,7 +3704,7 @@ hns3_check_tso_pkt_valid(struct rte_mbuf *m)
 		return -EINVAL;
 
 	hdr_len = m->l2_len + m->l3_len + m->l4_len;
-	hdr_len += (m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			m->outer_l2_len + m->outer_l3_len : 0;
 	if (hdr_len > HNS3_MAX_TSO_HDR_SIZE)
 		return -EINVAL;
@@ -3754,12 +3754,12 @@ hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ)
+	if (m->ol_flags & RTE_MBUF_F_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN)
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
@@ -3779,7 +3779,7 @@ hns3_udp_cksum_help(struct rte_mbuf *m)
 	uint16_t cksum = 0;
 	uint32_t l4_len;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 				struct rte_ipv4_hdr *, m->l2_len);
 		l4_len = rte_be_to_cpu_16(ipv4_hdr->total_length) - m->l3_len;
@@ -3810,8 +3810,8 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	uint16_t dst_port;
 
 	if (tx_queue->udp_cksum_mode == HNS3_SPECIAL_PORT_HW_CKSUM_MODE ||
-	    ol_flags & PKT_TX_TUNNEL_MASK ||
-	    (ol_flags & PKT_TX_L4_MASK) != PKT_TX_UDP_CKSUM)
+	    ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK ||
+	    (ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_UDP_CKSUM)
 		return true;
 	/*
 	 * A UDP packet with the same dst_port as VXLAN\VXLAN_GPE\GENEVE will
@@ -3828,7 +3828,7 @@ hns3_validate_tunnel_cksum(struct hns3_tx_queue *tx_queue, struct rte_mbuf *m)
 	case RTE_VXLAN_GPE_DEFAULT_PORT:
 	case RTE_GENEVE_DEFAULT_PORT:
 		udp_hdr->dgram_cksum = hns3_udp_cksum_help(m);
-		m->ol_flags = ol_flags & ~PKT_TX_L4_MASK;
+		m->ol_flags = ol_flags & ~RTE_MBUF_F_TX_L4_MASK;
 		return false;
 	default:
 		return true;
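
A pattern that recurs in the hns3 TSO paths above (and again in i40e
below): the header length charged against the payload includes the outer
headers only when a tunnel flag is set. A minimal sketch of that
accounting, using the standard mbuf length fields:

#include <stdint.h>
#include <rte_mbuf.h>

/* Bytes of headers preceding the TSO payload of this mbuf. */
static uint32_t
tso_hdr_len(const struct rte_mbuf *m)
{
	uint32_t len = m->l2_len + m->l3_len + m->l4_len;

	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
		len += m->outer_l2_len + m->outer_l3_len;
	return len;
}
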
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index bb309d38ed..70fd029aaa 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -471,7 +471,7 @@ struct hns3_tx_queue {
 	 *  - HNS3_SPECIAL_PORT_SW_CKSUM_MODE
 	 *     In this mode, HW can not do checksum for special UDP port like
 	 *     4789, 4790, 6081 for non-tunnel UDP packets and UDP tunnel
-	 *     packets without the PKT_TX_TUNEL_MASK in the mbuf. So, PMD need
+	 *     packets without the RTE_MBUF_F_TX_TUNNEL_MASK in the mbuf. So, the PMD needs to
 	 *     do the checksum for these packets to avoid a checksum error.
 	 *
 	 *  - HNS3_SPECIAL_PORT_HW_CKSUM_MODE
@@ -545,12 +545,11 @@ struct hns3_queue_info {
 	unsigned int socket_id;
 };
 
-#define HNS3_TX_CKSUM_OFFLOAD_MASK ( \
-	PKT_TX_OUTER_UDP_CKSUM | \
-	PKT_TX_OUTER_IP_CKSUM | \
-	PKT_TX_IP_CKSUM | \
-	PKT_TX_TCP_SEG | \
-	PKT_TX_L4_MASK)
+#define HNS3_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
+	RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+	RTE_MBUF_F_TX_IP_CKSUM | \
+	RTE_MBUF_F_TX_TCP_SEG | \
+	RTE_MBUF_F_TX_L4_MASK)
 
 enum hns3_cksum_status {
 	HNS3_CKSUM_NONE = 0,
@@ -574,29 +573,29 @@ hns3_rx_set_cksum_flag(struct hns3_rx_queue *rxq,
 				 BIT(HNS3_RXD_OL4E_B))
 
 	if (likely((l234_info & HNS3_RXD_CKSUM_ERR_MASK) == 0)) {
-		rxm->ol_flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		rxm->ol_flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L3E_B))) {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		rxq->dfx_stats.l3_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_L4E_B))) {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		rxq->dfx_stats.l4_csum_errors++;
 	} else {
-		rxm->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL3E_B)))
 		rxq->dfx_stats.ol3_csum_errors++;
 
 	if (unlikely(l234_info & BIT(HNS3_RXD_OL4E_B))) {
-		rxm->ol_flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		rxm->ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 		rxq->dfx_stats.ol4_csum_errors++;
 	}
 }
diff --git a/drivers/net/hns3/hns3_rxtx_vec_neon.h b/drivers/net/hns3/hns3_rxtx_vec_neon.h
index 74c848d5ef..0edd4756f1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_neon.h
+++ b/drivers/net/hns3/hns3_rxtx_vec_neon.h
@@ -105,7 +105,7 @@ hns3_desc_parse_field(struct hns3_rx_queue *rxq,
 		pkt = sw_ring[i].mbuf;
 
 		/* init rte_mbuf.rearm_data last 64-bit */
-		pkt->ol_flags = PKT_RX_RSS_HASH;
+		pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		l234_info = rxdp[i].rx.l234_info;
 		ol_info = rxdp[i].rx.ol_info;
diff --git a/drivers/net/hns3/hns3_rxtx_vec_sve.c b/drivers/net/hns3/hns3_rxtx_vec_sve.c
index d5c49333b2..be1fdbcdf0 100644
--- a/drivers/net/hns3/hns3_rxtx_vec_sve.c
+++ b/drivers/net/hns3/hns3_rxtx_vec_sve.c
@@ -43,7 +43,7 @@ hns3_desc_parse_field_sve(struct hns3_rx_queue *rxq,
 
 	for (i = 0; i < (int)bd_vld_num; i++) {
 		/* init rte_mbuf.rearm_data last 64-bit */
-		rx_pkts[i]->ol_flags = PKT_RX_RSS_HASH;
+		rx_pkts[i]->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 
 		ret = hns3_handle_bdinfo(rxq, rx_pkts[i], key->bd_base_info[i],
 					 key->l234_info[i]);
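
On the "init rte_mbuf.rearm_data last 64-bit" comments in the two vector
hunks above: in the mbuf layout at the time of writing, ol_flags
immediately follows the 8-byte rearm word, so these paths can seed it with
the one flag every packet carries and OR per-packet bits in afterwards.
Sketch of the seeding step only:

#include <rte_mbuf.h>

static void
seed_rx_flags(struct rte_mbuf *m)
{
	/* constant first; checksum/VLAN bits are OR'd in per packet */
	m->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
}
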
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index c858354b73..1ce0f5e472 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -44,42 +44,39 @@
 #define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define I40E_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define I40E_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define I40E_TX_IEEE1588_TMST 0
 #endif
 
-#define I40E_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
-
-#define I40E_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_OUTER_IPV6 |	\
-		PKT_TX_IPV4 |		\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IP_CKSUM |       \
-		PKT_TX_L4_MASK |        \
-		PKT_TX_OUTER_IP_CKSUM | \
-		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ |       \
-		PKT_TX_VLAN |	\
-		PKT_TX_TUNNEL_MASK |	\
+#define I40E_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+
+#define I40E_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_OUTER_IPV6 |	\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IP_CKSUM |       \
+		RTE_MBUF_F_TX_L4_MASK |        \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+		RTE_MBUF_F_TX_TCP_SEG |        \
+		RTE_MBUF_F_TX_QINQ |       \
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
 #define I40E_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_MASK)
 
-#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK ( \
-		PKT_TX_IPV4 | \
-		PKT_TX_IPV6 | \
-		PKT_TX_OUTER_IPV4 | \
-		PKT_TX_OUTER_IPV6)
+#define I40E_TX_OFFLOAD_SIMPLE_SUP_MASK (RTE_MBUF_F_TX_IPV4 | \
+		RTE_MBUF_F_TX_IPV6 | \
+		RTE_MBUF_F_TX_OUTER_IPV4 | \
+		RTE_MBUF_F_TX_OUTER_IPV6)
 
 #define I40E_TX_OFFLOAD_SIMPLE_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ I40E_TX_OFFLOAD_SIMPLE_SUP_MASK)
 
 static int
 i40e_monitor_callback(const uint64_t value,
@@ -119,7 +116,7 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -130,8 +127,8 @@ i40e_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union i40e_rx_desc *rxdp)
 #ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.qword2.ext_status) &
 		(1 << I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-			PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+			RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.qword2.l2tag2_2);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -154,11 +151,11 @@ i40e_rxd_status_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
-			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ?
-							PKT_RX_FDIR : 0);
+							RTE_MBUF_F_RX_FDIR : 0);
 
 	return flags;
 }
@@ -171,22 +168,22 @@ i40e_rxd_error_to_pkt_flags(uint64_t qword)
 
 #define I40E_RX_ERR_BITS 0x3f
 	if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -205,9 +202,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
 
 	if ((mb->packet_type & RTE_PTYPE_L2_MASK)
 			== RTE_PTYPE_L2_ETHER_TIMESYNC)
-		pkt_flags = PKT_RX_IEEE1588_PTP;
+		pkt_flags = RTE_MBUF_F_RX_IEEE1588_PTP;
 	if (tsyn & 0x04) {
-		pkt_flags |= PKT_RX_IEEE1588_TMST;
+		pkt_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
 		mb->timesync = tsyn & 0x03;
 	}
 
@@ -233,21 +230,21 @@ i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	} else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 	if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) {
 		mb->hash.fdir.lo =
 			rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo);
-		flags |= PKT_RX_FDIR_FLX;
+		flags |= RTE_MBUF_F_RX_FDIR_FLX;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -258,11 +255,11 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -270,15 +267,15 @@ i40e_parse_tunneling_params(uint64_t ol_flags,
 		I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= I40E_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= I40E_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -306,7 +303,7 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			union i40e_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 				<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 	else
@@ -314,21 +311,21 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 			<< I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2)
 				<< I40E_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2)
 			<< I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -336,18 +333,18 @@ i40e_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 				I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -526,10 +523,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
 				ptype_tbl[(uint8_t)((qword1 &
 				I40E_RXD_QW1_PTYPE_MASK) >>
 				I40E_RXD_QW1_PTYPE_SHIFT)];
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(\
 					rxdp[j].wb.qword0.hi_dword.rss);
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -789,10 +786,10 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -957,10 +954,10 @@ i40e_recv_scattered_pkts(void *rx_queue,
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)((qword1 &
 			I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT)];
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= i40e_rxd_build_fdir(&rxd, first_seg);
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1004,13 +1001,13 @@ i40e_recv_scattered_pkts(void *rx_queue,
 static inline uint16_t
 i40e_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_TUNNEL_MASK;
+	static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
-	mask |= PKT_TX_IEEE1588_TMST;
+	mask |= RTE_MBUF_F_TX_IEEE1588_TMST;
 #endif
 
 	return (flags & mask) ? 1 : 0;
@@ -1029,7 +1026,7 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = I40E_TX_CTX_DESC_TSO;
@@ -1122,7 +1119,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(i40e_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -1151,7 +1148,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1161,7 +1158,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			i40e_parse_tunneling_params(ol_flags, tx_offload,
 						    &cd_tunneling_params);
 		/* Enable checksum offloading */
@@ -1186,12 +1183,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled means no timestamp */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					i40e_set_tso_ctx(tx_pkt, tx_offload);
 			else {
 #ifdef RTE_LIBRTE_IEEE1588
-				if (ol_flags & PKT_TX_IEEE1588_TMST)
+				if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 					cd_type_cmd_tso_mss |=
 						((uint64_t)I40E_TX_CTX_DESC_TSYN <<
 						 I40E_TXD_CTX_QW1_CMD_SHIFT);
@@ -1200,7 +1197,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
@@ -1239,7 +1236,7 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
 				txd->buffer_addr =
 					rte_cpu_to_le_64(buf_dma_addr);
@@ -1580,7 +1577,7 @@ i40e_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check for m->nb_segs to not exceed the limits. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > I40E_TX_MAX_MTU_SEG ||
 			    m->pkt_len > I40E_FRAME_SIZE_MAX) {
 				rte_errno = EINVAL;
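
i40e_calc_context_desc() above doubles as a compact list of which Tx flags
cost an extra context descriptor on this hardware. Restated as a
standalone sketch (the optional IEEE 1588 bit is omitted for brevity):

#include <stdint.h>
#include <rte_mbuf_core.h>

/* 1 if the requested offloads need a context descriptor, else 0. */
static uint16_t
needs_ctx_desc(uint64_t ol_flags)
{
	const uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
			      RTE_MBUF_F_TX_TCP_SEG |
			      RTE_MBUF_F_TX_QINQ |
			      RTE_MBUF_F_TX_TUNNEL_MASK;

	return (ol_flags & mask) ? 1 : 0;
}
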
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index b99323992f..d0bf86dfba 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -117,26 +117,26 @@ desc_to_olflags_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts)
 	/* map rss and vlan type to rss hash and vlan flag */
 	const vector unsigned char vlan_flags = (vector unsigned char){
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char rss_flags = (vector unsigned char){
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const vector unsigned char l3_l4e_flags = (vector unsigned char){
 			0,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
-					     | PKT_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
+					     | RTE_MBUF_F_RX_IP_CKSUM_BAD,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = (vector unsigned int)vec_mergel(descs[0], descs[1]);
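
(All of the vector Rx paths in this series rely on the same table-lookup
trick: a few descriptor status/error bits form a small index into a
precomputed array of flag words. A scalar sketch of the idea, with the
table abbreviated and not part of the patch:)

    #include <stdint.h>
    #include <rte_mbuf_core.h>

    /* Map a 3-bit L3/L4 error field from the Rx descriptor to mbuf
     * flags.  The vector code above does the same lookup for 4 (or 8)
     * packets at once with a byte shuffle instead of array indexing.
     */
    static const uint64_t l3_l4e_tbl[8] = {
            [1] = RTE_MBUF_F_RX_IP_CKSUM_BAD,
            [2] = RTE_MBUF_F_RX_L4_CKSUM_BAD,
            [3] = RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD,
            [4] = RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
            /* entries 5-7 combine the outer flag as in the table above */
    };

    static inline uint64_t
    l3_l4e_to_ol_flags(uint32_t err_bits)   /* err_bits: 3-bit field */
    {
            return l3_l4e_tbl[err_bits & 0x7];
    }
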
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index 3b9eef91a9..ca10e0dd15 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -78,7 +78,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -208,8 +208,8 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32(
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-			0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+			0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/*
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -217,11 +217,11 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8(
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/*
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -229,37 +229,37 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -442,7 +442,7 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask = _mm256_cmpeq_epi32(fdir, fdir_id);
 			__m256i fdir_bits = _mm256_and_si256(fdir_mask, pkt_fdir_bit);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d64223..2c779fa2a6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -204,7 +204,7 @@ desc_fdir_processing_32b(volatile union i40e_rx_desc *rxdp,
 	 * - Position that bit correctly based on packet number
 	 * - OR in the resulting bit to mbuf_flags
 	 */
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 	__m256i mbuf_flag_mask = _mm256_set_epi32(0, 0, 0, 1 << 13,
 						  0, 0, 0, 1 << 13);
 	__m256i desc_flag_bit =  _mm256_and_si256(mbuf_flag_mask, fdir_mask);
@@ -319,8 +319,8 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf = _mm256_set_epi32
-		(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-		0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+		0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 
 	/* data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -328,11 +328,11 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 */
 	const __m256i rss_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0, /* end up 128-bits */
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0, /* end up 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-		0, 0, PKT_RX_FDIR, 0);
+		RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+		0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/* data to be shuffled by the result of the flags mask shifted by 22
 	 * bits.  This gives us the l3_l4 flags.
@@ -340,33 +340,33 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
 		/* shift right 1 bit to make sure it does not exceed 255 */
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 		/* second 128-bits */
 		0, 0, 0, 0, 0, 0, 0, 0,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-		PKT_RX_IP_CKSUM_BAD >> 1,
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask = _mm256_set1_epi32
-		(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-		PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-		PKT_RX_OUTER_IP_CKSUM_BAD);
+		(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	uint16_t i, received;
 
@@ -571,7 +571,7 @@ _recv_raw_pkts_vec_avx512(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * order (hi->lo): [1, 3, 5, 7, 0, 2, 4, 6]
 			 * Then OR FDIR flags to mbuf_flags on FDIR ID hit.
 			 */
-			RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
+			RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
 			const __m256i pkt_fdir_bit = _mm256_set1_epi32(1 << 13);
 			const __m256i fdir_mask =
 				_mm256_cmpeq_epi32(fdir, fdir_id);
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b2683fda60..b9d9dec769 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -93,43 +93,43 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804};
 
 	const uint32x4_t cksum_mask = {
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD};
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD};
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const uint8x16_t vlan_flags = {
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0, 0, 0,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_FDIR, 0, 0,
-			0, 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH | PKT_RX_FDIR,
+			0, RTE_MBUF_F_RX_FDIR, 0, 0,
+			0, 0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
 
 	const uint8x16_t l3_l4e_flags = {
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
 			0, 0, 0, 0, 0, 0, 0, 0};
 
 	vlan0 = vzipq_u32(vreinterpretq_u32_u64(descs[0]),
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index b235502db5..497b2404c6 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -143,7 +143,7 @@ descs_to_fdir_32b(volatile union i40e_rx_desc *rxdp, struct rte_mbuf **rx_pkt)
 	 * correct location in the mbuf->ol_flags
 	 */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	v_fd_id_mask = _mm_srli_epi32(v_fd_id_mask, 31);
 	v_fd_id_mask = _mm_slli_epi32(v_fd_id_mask, FDIR_ID_BIT_SHIFT);
 
@@ -203,9 +203,9 @@ descs_to_fdir_16b(__m128i fltstat, __m128i descs[4], struct rte_mbuf **rx_pkt)
 	__m128i v_desc0_mask = _mm_and_si128(v_desc_fdir_mask, v_desc0_shift);
 	descs[0] = _mm_blendv_epi8(descs[0], _mm_setzero_si128(), v_desc0_mask);
 
-	/* Shift to 1 or 0 bit per u32 lane, then to PKT_RX_FDIR_ID offset */
+	/* Shift to 1 or 0 bit per u32 lane, then to RTE_MBUF_F_RX_FDIR_ID offset */
 	const uint32_t FDIR_ID_BIT_SHIFT = 13;
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << FDIR_ID_BIT_SHIFT));
 	__m128i v_mask_one_bit = _mm_srli_epi32(v_fdir_id_mask, 31);
 	return _mm_slli_epi32(v_mask_one_bit, FDIR_ID_BIT_SHIFT);
 }
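
(On the RTE_BUILD_BUG_ON() lines touched here: the vector code
hard-codes bit 13, so the assertion pins the renamed macro to that
value at compile time. A minimal sketch of the same guard pattern:)

    #include <rte_common.h>      /* RTE_BUILD_BUG_ON() */
    #include <rte_mbuf_core.h>

    static inline uint32_t
    fdir_id_flag(uint32_t hit)   /* hit: 0 or 1 for one packet */
    {
            /* Compilation fails if the flag ever moves away from
             * bit 13, which would otherwise silently break the
             * hard-coded vector shifts.
             */
            RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
            return hit << 13;
    }
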
@@ -228,44 +228,44 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD  |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD  | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD  | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	/* Unpack "status" from quadword 1, bits 0:32 */
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
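
(Why every entry in these shuffle tables is stored shifted right by
one: the byte shuffles can only deliver 8-bit lanes, while
RTE_MBUF_F_RX_L4_CKSUM_GOOD sits at bit 8, i.e. value 0x100. The
looked-up byte is shifted back before being OR'ed into ol_flags. A
scalar sketch, assuming the bit layout current when this series was
written:)

    #include <stdint.h>
    #include <rte_mbuf_core.h>

    /* (RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) is
     * 0x180, stored in the table as 0xc0, restored to 0x180 here.
     */
    static inline uint64_t
    restore_cksum_flags(uint8_t tbl_entry)
    {
            return (uint64_t)tbl_entry << 1;
    }
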
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 045fd92368..d7ee47610a 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -379,14 +379,14 @@ iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -403,13 +403,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -445,13 +445,13 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1044,7 +1044,7 @@ iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
 {
 	if (rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
 		(1 << IAVF_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.qword0.lo_dword.l2tag1);
 	} else {
@@ -1072,7 +1072,7 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
 #endif
 
 	if (vlan_tci) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci = vlan_tci;
 	}
 }
@@ -1089,26 +1089,26 @@ iavf_rxd_to_pkt_flags(uint64_t qword)
 	/* Check if RSS_HASH */
 	flags = (((qword >> IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT) &
 					IAVF_RX_DESC_FLTSTAT_RSS_HASH) ==
-			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+			IAVF_RX_DESC_FLTSTAT_RSS_HASH) ? RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	/* Check if FDIR Match */
 	flags |= (qword & (1 << IAVF_RX_DESC_STATUS_FLM_SHIFT) ?
-				PKT_RX_FDIR : 0);
+				RTE_MBUF_F_RX_FDIR : 0);
 
 	if (likely((error_bits & IAVF_RX_ERR_BITS) == 0)) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_IPE_SHIFT)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(error_bits & (1 << IAVF_RX_DESC_ERROR_L4E_SHIFT)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	/* TODO: Oversize error bit is not processed here */
 
@@ -1129,12 +1129,12 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
 	if (flexbh == IAVF_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) {
 		mb->hash.fdir.hi =
 			rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
-		flags |= PKT_RX_FDIR_ID;
+		flags |= RTE_MBUF_F_RX_FDIR_ID;
 	}
 #else
 	mb->hash.fdir.hi =
 		rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id);
-	flags |= PKT_RX_FDIR_ID;
+	flags |= RTE_MBUF_F_RX_FDIR_ID;
 #endif
 	return flags;
 }
@@ -1158,22 +1158,22 @@ iavf_flex_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	return flags;
 }
@@ -1292,11 +1292,11 @@ iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			rxm->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, rxm);
 
 		rxm->ol_flags |= pkt_flags;
@@ -1693,11 +1693,11 @@ iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ptype_tbl[(uint8_t)((qword1 &
 			IAVF_RXD_QW1_PTYPE_MASK) >> IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-		if (pkt_flags & PKT_RX_RSS_HASH)
+		if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 			first_seg->hash.rss =
 				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
 
-		if (pkt_flags & PKT_RX_FDIR)
+		if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 			pkt_flags |= iavf_rxd_build_fdir(&rxd, first_seg);
 
 		first_seg->ol_flags |= pkt_flags;
@@ -1862,11 +1862,11 @@ iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq)
 				IAVF_RXD_QW1_PTYPE_MASK) >>
 				IAVF_RXD_QW1_PTYPE_SHIFT)];
 
-			if (pkt_flags & PKT_RX_RSS_HASH)
+			if (pkt_flags & RTE_MBUF_F_RX_RSS_HASH)
 				mb->hash.rss = rte_le_to_cpu_32(
 					rxdp[j].wb.qword0.hi_dword.rss);
 
-			if (pkt_flags & PKT_RX_FDIR)
+			if (pkt_flags & RTE_MBUF_F_RX_FDIR)
 				pkt_flags |= iavf_rxd_build_fdir(&rxdp[j], mb);
 
 			mb->ol_flags |= pkt_flags;
@@ -2072,9 +2072,9 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
 static inline uint16_t
 iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
-	if (flags & PKT_TX_TCP_SEG)
+	if (flags & RTE_MBUF_F_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN &&
+	if (flags & RTE_MBUF_F_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2091,21 +2091,21 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 		      IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2113,18 +2113,18 @@ iavf_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -2260,7 +2260,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN &&
+		if (ol_flags & RTE_MBUF_F_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2297,12 +2297,12 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* TSO enabled */
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN &&
-			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
+			if (ol_flags & RTE_MBUF_F_TX_VLAN &&
+			    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
 				cd_l2tag2 = tx_pkt->vlan_tci;
@@ -2415,7 +2415,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		ol_flags = m->ol_flags;
 
 		/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
-		if (!(ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			if (m->nb_segs > IAVF_TX_MAX_MTU_SEG) {
 				rte_errno = EINVAL;
 				return i;
@@ -2446,7 +2446,7 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS &&
-		    ol_flags & (PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN)) {
+		    ol_flags & (RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN)) {
 			ret = iavf_check_vlan_up2tc(txq, m);
 			if (ret != 0) {
 				rte_errno = -ret;
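
(A consumer-side note on the Rx checksum bits set above: the result is
a two-bit field, UNKNOWN/GOOD/BAD/NONE, so applications should mask
before comparing. A minimal sketch, not part of the patch:)

    #include <rte_mbuf.h>

    static inline int
    ip_cksum_ok(const struct rte_mbuf *m)
    {
            switch (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) {
            case RTE_MBUF_F_RX_IP_CKSUM_GOOD:
                    return 1;       /* verified by hardware */
            case RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN:
                    return -1;      /* verify in software */
            default:                /* BAD or NONE */
                    return 0;
            }
    }
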
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 229a2ea4dd..a8df309a55 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -52,23 +52,21 @@
 #define IAVF_TSO_MAX_SEG          UINT8_MAX
 #define IAVF_TX_MAX_MTU_SEG       8
 
-#define IAVF_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
-
-#define IAVF_TX_OFFLOAD_MASK (  \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG)
+#define IAVF_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
+
+#define IAVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
 
 /**
  * Rx Flex Descriptors
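
(The XOR above computes the Tx flag bits this PMD does not support:
everything in the generic RTE_MBUF_F_TX_OFFLOAD_MASK minus the
driver's own mask. The prep path then needs a single AND per packet;
roughly, as a sketch assuming the header above is included:)

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>

    static uint16_t
    prep_check(struct rte_mbuf **pkts, uint16_t nb)
    {
            uint16_t i;

            for (i = 0; i < nb; i++) {
                    if (pkts[i]->ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) {
                            rte_errno = ENOTSUP;
                            break;  /* i packets passed validation */
                    }
            }
            return i;
    }
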
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 96c05d9319..9817d2c011 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -127,8 +127,8 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 * destination
 	 */
 	const __m256i vlan_flags_shuf =
-		_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-				 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+		_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+				 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 11.
 	 * If RSS/FDIR bits are set, shuffle moves appropriate flags in
@@ -136,11 +136,11 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i rss_flags_shuf =
 		_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 				0, 0, 0, 0, 0, 0, 0, 0,
-				PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-				0, 0, 0, 0, PKT_RX_FDIR, 0);
+				RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+				0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	/**
 	 * data to be shuffled by the result of the flags mask shifted by 22
@@ -148,33 +148,33 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
@@ -502,10 +502,10 @@ static inline __m256i
 flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -626,36 +626,36 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	 */
 	const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 			/* second 128-bits */
 			0, 0, 0, 0, 0, 0, 0, 0,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-				   PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-				   PKT_RX_OUTER_IP_CKSUM_BAD);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -664,27 +664,27 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m256i rss_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	const __m256i vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0);
 
 	uint16_t i, received;
@@ -1025,8 +1025,8 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 							0, 0, 0, 0,
 							0, 0, 0, 0,
 							0, 0,
-							PKT_RX_VLAN |
-							PKT_RX_VLAN_STRIPPED,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
 							0);
 
 				vlan_flags =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index cb0b057b0f..c47fd04593 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -431,8 +431,8 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 * destination
 			 */
 			const __m256i vlan_flags_shuf =
-				_mm256_set_epi32(0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0,
-						 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED, 0);
+				_mm256_set_epi32(0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0,
+						 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED, 0);
 #endif
 
 #ifdef IAVF_RX_RSS_OFFLOAD
@@ -443,11 +443,11 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i rss_flags_shuf =
 				_mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0,/* end up 128-bits */
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0,/* end up 128-bits */
 						0, 0, 0, 0, 0, 0, 0, 0,
-						PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH,
-						0, 0, 0, 0, PKT_RX_FDIR, 0);
+						RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH,
+						0, 0, 0, 0, RTE_MBUF_F_RX_FDIR, 0);
 #endif
 
 #ifdef IAVF_RX_CSUM_OFFLOAD
@@ -457,33 +457,33 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it does not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-					 PKT_RX_L4_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-					PKT_RX_IP_CKSUM_BAD >> 1,
-					(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+					 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+					RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+					(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 
 #if defined(IAVF_RX_CSUM_OFFLOAD) || defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
@@ -688,10 +688,10 @@ static __rte_always_inline __m256i
 flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -974,36 +974,36 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 */
 			const __m256i l3_l4_flags_shuf = _mm256_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 					/* shift right 1 bit to make sure it does not exceed 255 */
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 					/* second 128-bits */
 					0, 0, 0, 0, 0, 0, 0, 0,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-					 PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-					(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+					 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+					(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 			const __m256i cksum_mask =
-				_mm256_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-						  PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-						  PKT_RX_OUTER_IP_CKSUM_BAD);
+				_mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+						  RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+						  RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 #endif
 #if defined(IAVF_RX_VLAN_OFFLOAD) || defined(IAVF_RX_RSS_OFFLOAD)
 			/**
@@ -1015,28 +1015,28 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_RSS_HASH, 0,
-					 PKT_RX_RSS_HASH, 0);
+					 RTE_MBUF_F_RX_RSS_HASH, 0,
+					 RTE_MBUF_F_RX_RSS_HASH, 0);
 
 			const __m256i vlan_flags_shuf = _mm256_set_epi8
 					(0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0,
 					 /* end up 128-bits */
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
 					 0, 0, 0, 0,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-					 PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+					 RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 					 0, 0);
 #endif
 
@@ -1273,8 +1273,8 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 							 0, 0, 0, 0,
 							 0, 0, 0, 0,
 							 0, 0,
-							 PKT_RX_VLAN |
-							 PKT_RX_VLAN_STRIPPED,
+							 RTE_MBUF_F_RX_VLAN |
+							 RTE_MBUF_F_RX_VLAN_STRIPPED,
 							 0);
 
 					vlan_flags =
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 457d6339e1..1fd37b74c1 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -326,33 +326,33 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 		     IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			     IAVF_TX_DESC_LENGTH_IPLEN_SHIFT;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= IAVF_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			     IAVF_TX_DESC_LENGTH_L4_FC_LEN_SHIFT;
@@ -365,7 +365,7 @@ iavf_txd_enable_offload(__rte_unused struct rte_mbuf *tx_pkt,
 #endif
 
 #ifdef IAVF_TX_VLAN_QINQ_OFFLOAD
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 			    IAVF_TXD_QW1_L2TAG1_SHIFT);
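
(Caller side of the VLAN hunk just above, as a sketch: the application
passes the tag out of band in vlan_tci and sets the flag; the driver
then writes it into L2TAG1 as shown. The helper name is illustrative
only:)

    #include <rte_mbuf.h>

    static inline void
    request_vlan_insert(struct rte_mbuf *m, uint16_t tci)
    {
            m->vlan_tci = tci;                  /* tag to insert */
            m->ol_flags |= RTE_MBUF_F_TX_VLAN;  /* NIC inserts it on Tx */
    }
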
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..363d0e62df 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -108,42 +108,42 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 			0x1c03804, 0x1c03804, 0x1c03804, 0x1c03804);
 
 	const __m128i cksum_mask = _mm_set_epi32(
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD |
-			PKT_RX_L4_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD |
-			PKT_RX_OUTER_IP_CKSUM_BAD);
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD |
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map rss and vlan type to rss hash and vlan flag */
 	const __m128i vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			0, 0, 0, PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
+			0, 0, 0, RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			0, 0, 0, 0);
 
 	const __m128i rss_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_FDIR, PKT_RX_RSS_HASH, 0, 0,
-			0, 0, PKT_RX_FDIR, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_RSS_HASH, 0, 0,
+			0, 0, RTE_MBUF_F_RX_FDIR, 0);
 
 	const __m128i l3_l4e_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD |
-			 PKT_RX_L4_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_OUTER_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD) >> 1,
-			PKT_RX_IP_CKSUM_BAD >> 1,
-			(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+			 RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+			(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1);
 
 	vlan0 = _mm_unpackhi_epi32(descs[0], descs[1]);
 	vlan1 = _mm_unpackhi_epi32(descs[2], descs[3]);
@@ -193,10 +193,10 @@ static inline __m128i
 flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -225,43 +225,43 @@ flex_desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
 	const __m128i desc_mask = _mm_set_epi32(0x3070, 0x3070,
 						0x3070, 0x3070);
 
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0,
 			/* shift right 1 bit to make sure it does not exceed 255 */
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD |
-			 PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-			(PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1);
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+			(RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
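
The vector helper flex_rxd_to_fdir_flags_vec() above encodes the same rule
as the scalar RX paths touched later in this patch: a descriptor flow_id of
0xFFFFFFFF (FDID_MIS_MAGIC) means no flow-director match. A scalar sketch,
shown only to make the renamed flags easier to follow (the helper name is
hypothetical, not part of the patch):

	/* Sketch: scalar form of the per-descriptor FDIR flag
	 * computation done with compare/andnot in the vector code. */
	static inline uint64_t
	fdir_id_to_ol_flags(uint32_t flow_id)
	{
		if (flow_id == 0xFFFFFFFF)	/* FDID_MIS_MAGIC: no match */
			return 0;
		return RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
	}
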
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 7a2220daa4..edf8d6fcd8 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -10,11 +10,10 @@
 #include "ice_rxtx.h"
 #include "ice_rxtx_vec_common.h"
 
-#define ICE_TX_CKSUM_OFFLOAD_MASK (		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_OUTER_IP_CKSUM)
+#define ICE_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 
 /* Offset of mbuf dynamic field for protocol extraction data */
 int rte_net_ice_dynfield_proto_xtr_metadata_offs = -1;
@@ -88,13 +87,13 @@ ice_rxd_to_pkt_fields_by_comms_generic(__rte_unused struct ice_rx_queue *rxq,
 	uint16_t stat_err = rte_le_to_cpu_16(desc->status_error0);
 
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 #endif
@@ -112,14 +111,14 @@ ice_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct ice_rx_queue *rxq,
 #endif
 
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 #endif
@@ -136,13 +135,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v1(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -178,13 +177,13 @@ ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
 
 	stat_err = rte_le_to_cpu_16(desc->status_error0);
 	if (likely(stat_err & (1 << ICE_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
-		mb->ol_flags |= PKT_RX_RSS_HASH;
+		mb->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
 	}
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (desc->flow_id != 0xFFFFFFFF) {
-		mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+		mb->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 		mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
 	}
 
@@ -1506,27 +1505,27 @@ ice_rxd_error_to_pkt_flags(uint16_t stat_err0)
 		return 0;
 
 	if (likely(!(stat_err0 & ICE_RX_FLEX_ERR0_BITS))) {
-		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+		flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 		return flags;
 	}
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
-		flags |= PKT_RX_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags |= PKT_RX_IP_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
-		flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 
 	if (unlikely(stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S)))
-		flags |= PKT_RX_OUTER_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_OUTER_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
 
 	return flags;
 }
@@ -1536,7 +1535,7 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 {
 	if (rte_le_to_cpu_16(rxdp->wb.status_error0) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS0_L2TAG1P_S)) {
-		mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		mb->vlan_tci =
 			rte_le_to_cpu_16(rxdp->wb.l2tag1);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag1: %u",
@@ -1548,8 +1547,8 @@ ice_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union ice_rx_flex_desc *rxdp)
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
 	if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
 	    (1 << ICE_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
-		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
-				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		mb->ol_flags |= RTE_MBUF_F_RX_QINQ_STRIPPED | RTE_MBUF_F_RX_QINQ |
+				RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 		mb->vlan_tci_outer = mb->vlan_tci;
 		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
 		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
@@ -1642,7 +1641,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq)
 				rxq->time_high =
 				   rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high);
 				mb->timesync = rxq->queue_id;
-				pkt_flags |= PKT_RX_IEEE1588_PTP;
+				pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 			}
 
 			mb->ol_flags |= pkt_flags;
@@ -1959,7 +1958,7 @@ ice_recv_scattered_pkts(void *rx_queue,
 			rxq->time_high =
 			   rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high);
 			first_seg->timesync = rxq->queue_id;
-			pkt_flags |= PKT_RX_IEEE1588_PTP;
+			pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 		}
 
 		first_seg->ol_flags |= pkt_flags;
@@ -2389,7 +2388,7 @@ ice_recv_pkts(void *rx_queue,
 			rxq->time_high =
 			   rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high);
 			rxm->timesync = rxq->queue_id;
-			pkt_flags |= PKT_RX_IEEE1588_PTP;
+			pkt_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
 		}
 
 		rxm->ol_flags |= pkt_flags;
@@ -2423,11 +2422,11 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 			    uint32_t *cd_tunneling)
 {
 	/* EIPT: External (outer) IP header type */
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4;
-	else if (ol_flags & PKT_TX_OUTER_IPV4)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV4_NO_CSUM;
-	else if (ol_flags & PKT_TX_OUTER_IPV6)
+	else if (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)
 		*cd_tunneling |= ICE_TX_CTX_EIPT_IPV6;
 
 	/* EIPLEN: External (outer) IP header length, in DWords */
@@ -2435,16 +2434,16 @@ ice_parse_tunneling_params(uint64_t ol_flags,
 		ICE_TXD_CTX_QW0_EIPLEN_S;
 
 	/* L4TUNT: L4 Tunneling Type */
-	switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		/* for non UDP / GRE tunneling, set to 00b */
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_GTP:
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_GTP:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		*cd_tunneling |= ICE_TXD_CTX_UDP_TUNNELING;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		*cd_tunneling |= ICE_TXD_CTX_GRE_TUNNELING;
 		break;
 	default:
@@ -2481,7 +2480,7 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			union ice_tx_offload tx_offload)
 {
 	/* Set MACLEN */
-	if (ol_flags & PKT_TX_TUNNEL_MASK)
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		*td_offset |= (tx_offload.outer_l2_len >> 1)
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 	else
@@ -2489,21 +2488,21 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 			<< ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offloads */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		*td_offset |= (tx_offload.l3_len >> 2) <<
 			      ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (tx_offload.l4_len >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2511,18 +2510,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		*td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		*td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		*td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		*td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			      ICE_TX_DESC_LEN_L4_LEN_S;
@@ -2600,11 +2599,11 @@ ice_build_ctob(uint32_t td_cmd,
 static inline uint16_t
 ice_calc_context_desc(uint64_t flags)
 {
-	static uint64_t mask = PKT_TX_TCP_SEG |
-		PKT_TX_QINQ |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_IEEE1588_TMST;
+	static uint64_t mask = RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_QINQ |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_IEEE1588_TMST;
 
 	return (flags & mask) ? 1 : 0;
 }
@@ -2622,7 +2621,7 @@ ice_set_tso_ctx(struct rte_mbuf *mbuf, union ice_tx_offload tx_offload)
 	}
 
 	hdr_len = tx_offload.l2_len + tx_offload.l3_len + tx_offload.l4_len;
-	hdr_len += (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 		   tx_offload.outer_l2_len + tx_offload.outer_l3_len : 0;
 
 	cd_cmd = ICE_TX_CTX_DESC_TSO;
@@ -2709,7 +2708,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * the mbuf data size exceeds max data size that hw allows
 		 * per tx desc.
 		 */
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			nb_used = (uint16_t)(ice_calc_pkt_desc(tx_pkt) +
 					     nb_ctx);
 		else
@@ -2738,14 +2737,14 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+		if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 			td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
 
 		/* Fill in tunneling parameters if necessary */
 		cd_tunneling_params = 0;
-		if (ol_flags & PKT_TX_TUNNEL_MASK)
+		if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			ice_parse_tunneling_params(ol_flags, tx_offload,
 						   &cd_tunneling_params);
 
@@ -2769,10 +2768,10 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				txe->mbuf = NULL;
 			}
 
-			if (ol_flags & PKT_TX_TCP_SEG)
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 				cd_type_cmd_tso_mss |=
 					ice_set_tso_ctx(tx_pkt, tx_offload);
-			else if (ol_flags & PKT_TX_IEEE1588_TMST)
+			else if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_TSYN <<
 					ICE_TXD_CTX_QW1_CMD_S);
@@ -2781,7 +2780,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				rte_cpu_to_le_32(cd_tunneling_params);
 
 			/* TX context descriptor based double VLAN insert */
-			if (ol_flags & PKT_TX_QINQ) {
+			if (ol_flags & RTE_MBUF_F_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)ICE_TX_CTX_DESC_IL2TAG2 <<
@@ -2809,7 +2808,7 @@ ice_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
-			while ((ol_flags & PKT_TX_TCP_SEG) &&
+			while ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
 				unlikely(slen > ICE_MAX_DATA_PER_TXD)) {
 				txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
 				txd->cmd_type_offset_bsz =
@@ -3398,7 +3397,7 @@ ice_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
 
-		if (ol_flags & PKT_TX_TCP_SEG &&
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG &&
 		    (m->tso_segsz < ICE_MIN_TSO_MSS ||
 		     m->tso_segsz > ICE_MAX_TSO_MSS ||
 		     m->pkt_len > ICE_MAX_TSO_FRAME_SIZE)) {
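
The TX-side changes in ice_rxtx.c are driven entirely by ol_flags, so an
application requesting these offloads under the new names sets up its mbuf
along these lines (a minimal sketch; m is the struct rte_mbuf * being
transmitted and the header lengths are illustrative):

	/* Sketch: request IPv4 + TCP checksum offload with the renamed
	 * flags; l2_len/l3_len must describe the real headers. */
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM;
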
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac0180..c20927dc5c 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -20,10 +20,10 @@ static __rte_always_inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx2(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -142,82 +142,82 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	 * bits.  This gives us the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it does not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -226,16 +226,16 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* end up 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
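
The arithmetic in l3_l4_flags_shuf is dense after the rename but unchanged:
each byte-sized LUT entry carries the outer-L4 status (bits 21/22 of
ol_flags) shifted down by 20 and the whole value halved so it fits in eight
bits; the vector path undoes both steps after the shuffle and masks the
result with cksum_mask. A scalar sketch of the pack/unpack, assuming the
current flag bit layout:

	/* Pack: outer-L4 bits move from 21/22 down to 1/2, then halve. */
	uint8_t entry = (uint8_t)((RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 |
				   RTE_MBUF_F_RX_L4_CKSUM_GOOD |
				   RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
	/* Unpack: double, then restore the low two bits to 21/22. */
	uint32_t v = (uint32_t)entry << 1;
	uint64_t ol = ((uint64_t)(v & 0x6) << 20) | (v & ~0x6u);
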
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d2..1fe3de5aa2 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -135,10 +135,10 @@ static inline __m256i
 ice_flex_rxd_to_fdir_flags_vec_avx512(const __m256i fdir_id0_7)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m256i pkt_fdir_bit = _mm256_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m256i pkt_fdir_bit = _mm256_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m256i fdir_mis_mask = _mm256_set1_epi32(FDID_MIS_MAGIC);
 	__m256i fdir_mask = _mm256_cmpeq_epi32(fdir_id0_7,
@@ -242,82 +242,82 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	 * bits.  This gives use the l3_l4 flags.
 	 */
 	const __m256i l3_l4_flags_shuf =
-		_mm256_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm256_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * second 128-bits
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it does not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD  |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD  |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 	const __m256i cksum_mask =
-		 _mm256_set1_epi32(PKT_RX_IP_CKSUM_MASK |
-				   PKT_RX_L4_CKSUM_MASK |
-				   PKT_RX_OUTER_IP_CKSUM_BAD |
-				   PKT_RX_OUTER_L4_CKSUM_MASK);
+		 _mm256_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+				   RTE_MBUF_F_RX_L4_CKSUM_MASK |
+				   RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+				   RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK);
 	/**
 	 * data to be shuffled by result of flag mask, shifted down 12.
 	 * If RSS(bit12)/VLAN(bit13) are set,
@@ -326,16 +326,16 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 	const __m256i rss_vlan_flags_shuf = _mm256_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0,
 			/* 2nd 128-bits */
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	uint16_t i, received;
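
The RTE_BUILD_BUG_ON() guards repeated through these vector paths are what
make a flag rename of this scale safe: the SIMD code hard-codes the bit
positions, and the build fails if a renamed flag ever moves. The pattern in
isolation:

	/* Compile-time guard: the vector code assumes these exact bit
	 * positions, so a layout change breaks the build, not runtime. */
	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
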
 
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 5b5250565e..8983b6bf2c 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -568,33 +568,33 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 			ICE_TX_DESC_LEN_MACLEN_S;
 
 	/* Enable L3 checksum offload */
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV4) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
-	} else if (ol_flags & PKT_TX_IPV6) {
+	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
 		td_offset |= (tx_pkt->l3_len >> 2) <<
 			ICE_TX_DESC_LEN_IPLEN_S;
 	}
 
 	/* Enable L4 checksum offloads */
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_TCP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_TCP;
 		td_offset |= (sizeof(struct rte_tcp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_SCTP;
 		td_offset |= (sizeof(struct rte_sctp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		td_cmd |= ICE_TX_DESC_CMD_L4T_EOFT_UDP;
 		td_offset |= (sizeof(struct rte_udp_hdr) >> 2) <<
 			ICE_TX_DESC_LEN_L4_LEN_S;
@@ -606,7 +606,7 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
 	*txd_hi |= ((uint64_t)td_offset) << ICE_TXD_QW1_OFFSET_S;
 
 	/* Tx VLAN/QINQ insertion Offload */
-	if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
+	if (ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
 		td_cmd |= ICE_TX_DESC_CMD_IL2TAG1;
 		*txd_hi |= ((uint64_t)tx_pkt->vlan_tci <<
 				ICE_TXD_QW1_L2TAG1_S);
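
As in the scalar path, RTE_MBUF_F_TX_L4_MASK is a two-bit field rather than
a single flag, which is why the L4 offload is selected with a switch:
masking ol_flags yields exactly one of the NO_CKSUM/TCP/SCTP/UDP values.
A sketch of the consuming pattern (m is a struct rte_mbuf *):

	/* Sketch: the two-bit L4 field selects one checksum type. */
	switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
	case RTE_MBUF_F_TX_TCP_CKSUM:
		/* program a TCP checksum descriptor */
		break;
	case RTE_MBUF_F_TX_L4_NO_CKSUM:
	default:
		break;
	}
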
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b41..df1347e64d 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -14,10 +14,10 @@ static inline __m128i
 ice_flex_rxd_to_fdir_flags_vec(const __m128i fdir_id0_3)
 {
 #define FDID_MIS_MAGIC 0xFFFFFFFF
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR != (1 << 2));
-	RTE_BUILD_BUG_ON(PKT_RX_FDIR_ID != (1 << 13));
-	const __m128i pkt_fdir_bit = _mm_set1_epi32(PKT_RX_FDIR |
-			PKT_RX_FDIR_ID);
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR != (1 << 2));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_FDIR_ID != (1 << 13));
+	const __m128i pkt_fdir_bit = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+			RTE_MBUF_F_RX_FDIR_ID);
 	/* desc->flow_id field == 0xFFFFFFFF means fdir mismatch */
 	const __m128i fdir_mis_mask = _mm_set1_epi32(FDID_MIS_MAGIC);
 	__m128i fdir_mask = _mm_cmpeq_epi32(fdir_id0_3,
@@ -116,72 +116,72 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	 */
 	const __m128i desc_mask = _mm_set_epi32(0x30f0, 0x30f0,
 						0x30f0, 0x30f0);
-	const __m128i cksum_mask = _mm_set_epi32(PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD,
-						 PKT_RX_IP_CKSUM_MASK |
-						 PKT_RX_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_L4_CKSUM_MASK |
-						 PKT_RX_OUTER_IP_CKSUM_BAD);
+	const __m128i cksum_mask = _mm_set_epi32(RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD,
+						 RTE_MBUF_F_RX_IP_CKSUM_MASK |
+						 RTE_MBUF_F_RX_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK |
+						 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD);
 
 	/* map the checksum, rss and vlan fields to the checksum, rss
 	 * and vlan flag
 	 */
 	const __m128i cksum_flags =
-		_mm_set_epi8((PKT_RX_OUTER_L4_CKSUM_BAD >> 20 |
-		 PKT_RX_OUTER_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD |
-		  PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_BAD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
+		_mm_set_epi8((RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 |
+		 RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		  RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
 		/**
 		 * shift right 20 bits to use the low two bits to indicate
 		 * outer checksum status
 		 * shift right 1 bit to make sure it does not exceed 255
 		 */
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_BAD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_OUTER_IP_CKSUM_BAD |
-		 PKT_RX_L4_CKSUM_GOOD | PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_BAD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_BAD) >> 1,
-		(PKT_RX_OUTER_L4_CKSUM_GOOD >> 20 | PKT_RX_L4_CKSUM_GOOD |
-		 PKT_RX_IP_CKSUM_GOOD) >> 1);
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_BAD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD |
+		 RTE_MBUF_F_RX_L4_CKSUM_GOOD | RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_BAD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_BAD) >> 1,
+		(RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD >> 20 | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		 RTE_MBUF_F_RX_IP_CKSUM_GOOD) >> 1);
 
 	const __m128i rss_vlan_flags = _mm_set_epi8(0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
-			PKT_RX_RSS_HASH | PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-			PKT_RX_RSS_HASH, 0);
+			RTE_MBUF_F_RX_RSS_HASH | RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+			RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* merge 4 descriptors */
 	flags = _mm_unpackhi_epi32(descs[0], descs[1]);
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 97656b39fd..7a5cb2f371 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -74,17 +74,16 @@
 #define IGC_TSO_MAX_MSS			9216
 
 /* Bit Mask to indicate what bits required for building TX context */
-#define IGC_TX_OFFLOAD_MASK (		\
-		PKT_TX_OUTER_IPV4 |	\
-		PKT_TX_IPV6 |		\
-		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN |	\
-		PKT_TX_IP_CKSUM |	\
-		PKT_TX_L4_MASK |	\
-		PKT_TX_TCP_SEG |	\
-		PKT_TX_UDP_SEG)
-
-#define IGC_TX_OFFLOAD_SEG	(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)
+#define IGC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_VLAN |	\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |	\
+		RTE_MBUF_F_TX_TCP_SEG |	\
+		RTE_MBUF_F_TX_UDP_SEG)
+
+#define IGC_TX_OFFLOAD_SEG	(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG)
 
 #define IGC_ADVTXD_POPTS_TXSM	0x00000200 /* L4 Checksum offload request */
 #define IGC_ADVTXD_POPTS_IXSM	0x00000100 /* IP Checksum offload request */
@@ -92,7 +91,7 @@
 /* L4 Packet TYPE of Reserved */
 #define IGC_ADVTXD_TUCMD_L4T_RSV	0x00001800
 
-#define IGC_TX_OFFLOAD_NOTSUP_MASK (PKT_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
+#define IGC_TX_OFFLOAD_NOTSUP_MASK (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IGC_TX_OFFLOAD_MASK)
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -215,16 +214,18 @@ struct igc_tx_queue {
 static inline uint64_t
 rx_desc_statuserr_to_pkt_flags(uint32_t statuserr)
 {
-	static uint64_t l4_chksum_flags[] = {0, 0, PKT_RX_L4_CKSUM_GOOD,
-			PKT_RX_L4_CKSUM_BAD};
+	static uint64_t l4_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+			RTE_MBUF_F_RX_L4_CKSUM_BAD};
 
-	static uint64_t l3_chksum_flags[] = {0, 0, PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_BAD};
+	static uint64_t l3_chksum_flags[] = {0, 0,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD};
 	uint64_t pkt_flags = 0;
 	uint32_t tmp;
 
 	if (statuserr & IGC_RXD_STAT_VP)
-		pkt_flags |= PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 
 	tmp = !!(statuserr & (IGC_RXD_STAT_L4CS | IGC_RXD_STAT_UDPCS));
 	tmp = (tmp << 1) | (uint32_t)!!(statuserr & IGC_RXD_EXT_ERR_L4E);
@@ -332,10 +333,10 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 	rxm->vlan_tci = rte_le_to_cpu_16(rxd->wb.upper.vlan);
 
 	pkt_flags = (hlen_type_rss & IGC_RXD_RSS_TYPE_MASK) ?
-			PKT_RX_RSS_HASH : 0;
+			RTE_MBUF_F_RX_RSS_HASH : 0;
 
 	if (hlen_type_rss & IGC_RXD_VPKT)
-		pkt_flags |= PKT_RX_VLAN;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN;
 
 	pkt_flags |= rx_desc_statuserr_to_pkt_flags(staterr);
 
@@ -1468,7 +1469,7 @@ check_tso_para(uint64_t ol_req, union igc_tx_offload ol_para)
 	if (ol_para.tso_segsz > IGC_TSO_MAX_MSS || ol_para.l2_len +
 		ol_para.l3_len + ol_para.l4_len > IGC_TSO_MAX_HDRLEN) {
 		ol_req &= ~IGC_TX_OFFLOAD_SEG;
-		ol_req |= PKT_TX_TCP_CKSUM;
+		ol_req |= RTE_MBUF_F_TX_TCP_CKSUM;
 	}
 	return ol_req;
 }
@@ -1530,20 +1531,20 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
 	if (ol_flags & IGC_TX_OFFLOAD_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 		else
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV6 |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 
-		if (ol_flags & PKT_TX_TCP_SEG)
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP;
 		else
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP;
@@ -1554,26 +1555,26 @@ igc_set_xmit_ctx(struct igc_tx_queue *txq,
 		mss_l4len_idx |= (uint32_t)tx_offload.l4_len <<
 				IGC_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+		if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 			tx_offload_mask.data |= TX_MACIP_LEN_CMP_MASK;
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IGC_ADVTXD_TUCMD_IPV4;
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_TCP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_tcp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_UDP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_udp_hdr)
 				<< IGC_ADVTXD_L4LEN_SHIFT;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IGC_ADVTXD_TUCMD_L4T_SCTP |
 				IGC_ADVTXD_DTYP_CTXT | IGC_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= (uint32_t)sizeof(struct rte_sctp_hdr)
@@ -1604,7 +1605,7 @@ tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
+	cmdtype = vlan_cmd[(ol_flags & RTE_MBUF_F_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
@@ -1616,8 +1617,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	static const uint32_t l3_olinfo[2] = {0, IGC_ADVTXD_POPTS_IXSM};
 	uint32_t tmp;
 
-	tmp  = l4_olinfo[(ol_flags & PKT_TX_L4_MASK)  != PKT_TX_L4_NO_CKSUM];
-	tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+	tmp  = l4_olinfo[(ol_flags & RTE_MBUF_F_TX_L4_MASK)  != RTE_MBUF_F_TX_L4_NO_CKSUM];
+	tmp |= l3_olinfo[(ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0];
 	tmp |= l4_olinfo[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return tmp;
 }
@@ -1774,7 +1775,7 @@ igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * Timer 0 should be used for packet timestamping,
 		 * sample the packet timestamp to reg 0
 		 */
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IGC_ADVTXD_MAC_TSTAMP;
 
 		if (tx_ol_req) {
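
IGC_TX_OFFLOAD_NOTSUP_MASK above (and the ionic/ixgbe equivalents below)
uses the usual construction: XOR the driver's supported set against
RTE_MBUF_F_TX_OFFLOAD_MASK, the union of all TX offload flags, so anything
the driver cannot honour is caught in a single AND. Roughly, inside a
tx_pkt_prepare loop (sketch; m and i come from that loop):

	/* Sketch: reject a packet carrying unsupported TX offloads. */
	if (m->ol_flags & IGC_TX_OFFLOAD_NOTSUP_MASK) {
		rte_errno = ENOTSUP;
		return i;	/* stop at the offending packet */
	}
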
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 001a368856..fa77ca4327 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -257,7 +257,7 @@ ionic_tx_tcp_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IP_CKSUM) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -278,7 +278,7 @@ ionic_tx_tcp_inner_pseudo_csum(struct rte_mbuf *txm)
 	struct rte_tcp_hdr *tcp_hdr = (struct rte_tcp_hdr *)
 		(l3_hdr + txm->l3_len);
 
-	if (txm->ol_flags & PKT_TX_IPV4) {
+	if (txm->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		struct rte_ipv4_hdr *ipv4_hdr = (struct rte_ipv4_hdr *)l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = 0;
@@ -355,14 +355,14 @@ ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
+	bool has_vlan = !!(txm->ol_flags & RTE_MBUF_F_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-		(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-		((ol_flags & PKT_TX_OUTER_IPV4) ||
-		(ol_flags & PKT_TX_OUTER_IPV6));
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+		((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+		 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	/* Preload inner-most TCP csum field with IP pseudo hdr
 	 * calculated with IP length set to zero.  HW will later
@@ -477,15 +477,15 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	desc = &desc_base[q->head_idx];
 	info = IONIC_INFO_PTR(q, q->head_idx);
 
-	if ((ol_flags & PKT_TX_IP_CKSUM) &&
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) &&
 	    (txq->flags & IONIC_QCQ_F_CSUM_L3)) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L3;
 	}
 
-	if (((ol_flags & PKT_TX_TCP_CKSUM) &&
+	if (((ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_TCP)) ||
-	    ((ol_flags & PKT_TX_UDP_CKSUM) &&
+	    ((ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) &&
 	     (txq->flags & IONIC_QCQ_F_CSUM_UDP))) {
 		opcode = IONIC_TXQ_DESC_OPCODE_CSUM_HW;
 		flags |= IONIC_TXQ_DESC_FLAG_CSUM_L4;
@@ -494,11 +494,11 @@ ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN);
-	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
-			((ol_flags & PKT_TX_OUTER_IPV4) ||
-			(ol_flags & PKT_TX_OUTER_IPV6));
+	has_vlan = (ol_flags & RTE_MBUF_F_TX_VLAN);
+	encap = ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+			(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM)) &&
+			((ol_flags & RTE_MBUF_F_TX_OUTER_IPV4) ||
+			 (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6));
 
 	flags |= has_vlan ? IONIC_TXQ_DESC_FLAG_VLAN : 0;
 	flags |= encap ? IONIC_TXQ_DESC_FLAG_ENCAP : 0;
@@ -555,7 +555,7 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			rte_prefetch0(&q->info[next_q_head_idx]);
 		}
 
-		if (tx_pkts[nb_tx]->ol_flags & PKT_TX_TCP_SEG)
+		if (tx_pkts[nb_tx]->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			err = ionic_tx_tso(txq, tx_pkts[nb_tx]);
 		else
 			err = ionic_tx(txq, tx_pkts[nb_tx]);
@@ -585,16 +585,15 @@ ionic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
  *
  **********************************************************************/
 
-#define IONIC_TX_OFFLOAD_MASK (	\
-	PKT_TX_IPV4 |		\
-	PKT_TX_IPV6 |		\
-	PKT_TX_VLAN |		\
-	PKT_TX_IP_CKSUM |	\
-	PKT_TX_TCP_SEG |	\
-	PKT_TX_L4_MASK)
+#define IONIC_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IPV4 |		\
+	RTE_MBUF_F_TX_IPV6 |		\
+	RTE_MBUF_F_TX_VLAN |		\
+	RTE_MBUF_F_TX_IP_CKSUM |	\
+	RTE_MBUF_F_TX_TCP_SEG |	\
+	RTE_MBUF_F_TX_L4_MASK)
 
 #define IONIC_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IONIC_TX_OFFLOAD_MASK)
 
 uint16_t
 ionic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -840,30 +839,30 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
 	}
 
 	/* RSS */
-	pkt_flags |= PKT_RX_RSS_HASH;
+	pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	rxm->hash.rss = cq_desc->rss_hash;
 
 	/* Vlan Strip */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_VLAN) {
-		pkt_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		pkt_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxm->vlan_tci = cq_desc->vlan_tci;
 	}
 
 	/* Checksum */
 	if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_CALC) {
 		if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_OK)
-			pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		else if (cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_IP_BAD)
-			pkt_flags |= PKT_RX_IP_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 		if ((cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_TCP_OK) ||
 			(cq_desc->csum_flags & IONIC_RXQ_COMP_CSUM_F_UDP_OK))
-			pkt_flags |= PKT_RX_L4_CKSUM_GOOD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if ((cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_TCP_BAD) ||
 				(cq_desc->csum_flags &
 				IONIC_RXQ_COMP_CSUM_F_UDP_BAD))
-			pkt_flags |= PKT_RX_L4_CKSUM_BAD;
+			pkt_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	rxm->ol_flags = pkt_flags;
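
On the application side these Rx flags are read back through the
corresponding masks rather than individual bits. A hedged sketch of a
consumer, assuming the renamed flag set:

	/* Drop packets whose IP checksum the NIC reported as bad. */
	if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
	    RTE_MBUF_F_RX_IP_CKSUM_BAD)
		rte_pktmbuf_free(m);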
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a127dc0d86..3a5472a5bd 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1960,10 +1960,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
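
With stripping enabled the tag is removed from the packet data and is
only reachable through the mbuf, so consumers must test the flag before
trusting vlan_tci. A minimal sketch:

	uint16_t tci = 0;

	if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
		tci = m->vlan_tci; /* tag no longer in the packet data */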
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 0e3aec9906..1c80cd55d3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -54,27 +54,26 @@
 #include "ixgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define IXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define IXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define IXGBE_TX_IEEE1588_TMST 0
 #endif
 /* Bit mask to indicate which bits are required for building TX context */
-#define IXGBE_TX_OFFLOAD_MASK (			 \
-		PKT_TX_OUTER_IPV6 |		 \
-		PKT_TX_OUTER_IPV4 |		 \
-		PKT_TX_IPV6 |			 \
-		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN |		 \
-		PKT_TX_IP_CKSUM |		 \
-		PKT_TX_L4_MASK |		 \
-		PKT_TX_TCP_SEG |		 \
-		PKT_TX_MACSEC |			 \
-		PKT_TX_OUTER_IP_CKSUM |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
+#define IXGBE_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 |		 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |		 \
+		RTE_MBUF_F_TX_IPV6 |			 \
+		RTE_MBUF_F_TX_IPV4 |			 \
+		RTE_MBUF_F_TX_VLAN |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_L4_MASK |		 \
+		RTE_MBUF_F_TX_TCP_SEG |		 \
+		RTE_MBUF_F_TX_MACSEC |			 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
 		IXGBE_TX_IEEE1588_TMST)
 
 #define IXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IXGBE_TX_OFFLOAD_MASK)
 
 #if 1
 #define RTE_PMD_USE_PREFETCH
@@ -384,14 +383,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* implies IP cksum in IPv4 */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4 |
 				IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
@@ -407,14 +406,14 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		mss_l4len_idx |= tx_offload.tso_segsz << IXGBE_ADVTXD_MSS_SHIFT;
 		mss_l4len_idx |= tx_offload.l4_len << IXGBE_ADVTXD_L4LEN_SHIFT;
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_udp_hdr)
@@ -422,7 +421,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_tcp_hdr)
@@ -430,7 +429,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 			mss_l4len_idx |= sizeof(struct rte_sctp_hdr)
@@ -445,7 +444,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 		}
 	}
 
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
 		tx_offload_mask.l2_len |= ~0;
@@ -455,7 +454,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 			       << IXGBE_ADVTXD_TUNNEL_LEN;
 	}
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union ixgbe_crypto_tx_desc_md *md =
 				(union ixgbe_crypto_tx_desc_md *)mdata;
 		seqnum_seed |=
@@ -479,7 +478,7 @@ ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
 	vlan_macip_lens = tx_offload.l3_len;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		vlan_macip_lens |= (tx_offload.outer_l2_len <<
 				    IXGBE_ADVTXD_MACLEN_SHIFT);
 	else
@@ -529,11 +528,11 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM)
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		tmp |= IXGBE_ADVTXD_POPTS_IXSM;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		tmp |= IXGBE_ADVTXD_POPTS_TXSM;
 	return tmp;
 }
@@ -543,13 +542,13 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 		cmdtype |= (1 << IXGBE_ADVTXD_OUTERIPCS_SHIFT);
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= IXGBE_ADVTXD_MAC_LINKSEC;
 	return cmdtype;
 }
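
As a worked example of the mapping above, a packet carrying both
RTE_MBUF_F_TX_VLAN and RTE_MBUF_F_TX_TCP_SEG resolves to:

	cmdtype == (IXGBE_ADVTXD_DCMD_VLE | IXGBE_ADVTXD_DCMD_TSE)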
@@ -678,7 +677,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -826,14 +825,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= IXGBE_ADVTXD_MAC_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
 
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is
 				 * not the packet len but the tcp payload len */
 				pkt_len -= (tx_offload.l2_len +
@@ -1433,14 +1432,14 @@ static inline uint64_t
 ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 
@@ -1468,7 +1467,7 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & IXGBE_RXD_STAT_TMST)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1484,10 +1483,10 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	 * Bit 30: L4I, L4I integrity error
 	 */
 	static uint64_t error_to_pkt_flags_map[4] = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD
 	};
 	pkt_flags = error_to_pkt_flags_map[(rx_status >>
 		IXGBE_RXDADV_ERR_CKSUM_BIT) & IXGBE_RXDADV_ERR_CKSUM_MSK];
@@ -1499,18 +1498,18 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status, uint16_t pkt_info,
 	if ((rx_status & IXGBE_RXDADV_ERR_TCPE) &&
 	    (pkt_info & IXGBE_RXDADV_PKTTYPE_UDP) &&
 	    rx_udp_csum_zero_err)
-		pkt_flags &= ~PKT_RX_L4_CKSUM_BAD;
+		pkt_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_BAD;
 
 	if ((rx_status & IXGBE_RXD_STAT_OUTERIPCS) &&
 	    (rx_status & IXGBE_RXDADV_ERR_OUTERIPER)) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & IXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & IXGBE_RXDADV_LNKSEC_ERROR_BAD_SIG)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1597,10 +1596,10 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
 				ixgbe_rxd_pkt_info_to_pkt_type
 					(pkt_info[j], rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss = rte_le_to_cpu_32(
 				    rxdp[j].wb.lower.hi_dword.rss);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash = rte_le_to_cpu_16(
 				    rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
 				    IXGBE_ATR_HASH_MASK;
@@ -1918,7 +1917,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr, vlan_flags);
@@ -1932,10 +1931,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			ixgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 			rxm->hash.rss = rte_le_to_cpu_32(
 						rxd.wb.lower.hi_dword.rss);
-		else if (pkt_flags & PKT_RX_FDIR) {
+		else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash = rte_le_to_cpu_16(
 					rxd.wb.lower.hi_dword.csum_ip.csum) &
 					IXGBE_ATR_HASH_MASK;
@@ -2011,7 +2010,7 @@ ixgbe_fill_cluster_head_buf(
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
@@ -2024,9 +2023,9 @@ ixgbe_fill_cluster_head_buf(
 	head->packet_type =
 		ixgbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 		head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
-	else if (pkt_flags & PKT_RX_FDIR) {
+	else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash =
 			rte_le_to_cpu_16(desc->wb.lower.hi_dword.csum_ip.csum)
 							  & IXGBE_ATR_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index c541f537c7..90b254ea26 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -105,10 +105,10 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0x00, 0x00, 0x00, 0x00};
 
 	const uint8x16_t rss_flags = {
-			0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-			0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, 0, 0,
-			0, 0, 0, PKT_RX_FDIR};
+			0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+			0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_FDIR};
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const uint8x16_t vlan_csum_msk = {
@@ -123,23 +123,23 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const uint8x16_t vlan_csum_map_lo = {
-			PKT_RX_IP_CKSUM_GOOD,
-			PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD,
-			PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD,
-			vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD,
-			vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+			vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 			0, 0, 0, 0};
 
 	const uint8x16_t vlan_csum_map_hi = {
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-			PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
 			0, 0, 0, 0};
 
 	/* change mask from 0x200(IXGBE_RXDADV_PKTTYPE_UDP) to 0x2 */
@@ -153,7 +153,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 			0, 0, 0, 0};
 
 	const uint8x16_t udp_csum_bad_shuf = {
-			0xFF, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0, 0,
+			0xFF, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0,
 			0, 0, 0, 0};
@@ -194,7 +194,7 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 	vtag_lo = vorrq_u8(ptype, vtag_lo);
 
 	/* convert the UDP header present 0x2 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 8 bits word ol_flag in
 	 * vtag_lo (4x8). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -337,7 +337,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index 1dea95e73b..1eed949495 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -108,9 +108,9 @@ desc_to_olflags_v_ipsec(__m128i descs[4], struct rte_mbuf **rx_pkts)
 	const __m128i ipsec_proc_msk  =
 			_mm_set1_epi32(IXGBE_RXDADV_IPSEC_STATUS_SECP);
 	const __m128i ipsec_err_flag  =
-			_mm_set1_epi32(PKT_RX_SEC_OFFLOAD_FAILED |
-				       PKT_RX_SEC_OFFLOAD);
-	const __m128i ipsec_proc_flag = _mm_set1_epi32(PKT_RX_SEC_OFFLOAD);
+			_mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED |
+				       RTE_MBUF_F_RX_SEC_OFFLOAD);
+	const __m128i ipsec_proc_flag = _mm_set1_epi32(RTE_MBUF_F_RX_SEC_OFFLOAD);
 
 	rearm = _mm_set_epi32(*rearm3, *rearm2, *rearm1, *rearm0);
 	sterr = _mm_set_epi32(_mm_extract_epi32(descs[3], 2),
@@ -148,10 +148,10 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 			0x00FF, 0x00FF, 0x00FF, 0x00FF);
 
 	/* map rss type to rss hash flag */
-	const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
-			0, 0, 0, PKT_RX_RSS_HASH,
-			PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
-			PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
+	const __m128i rss_flags = _mm_set_epi8(RTE_MBUF_F_RX_FDIR, 0, 0, 0,
+			0, 0, 0, RTE_MBUF_F_RX_RSS_HASH,
+			RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH, 0,
+			RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, 0);
 
 	/* mask everything except vlan present and l4/ip csum error */
 	const __m128i vlan_csum_msk = _mm_set_epi16(
@@ -165,23 +165,23 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	/* map vlan present (0x8), IPE (0x2), L4E (0x1) to ol_flags */
 	const __m128i vlan_csum_map_lo = _mm_set_epi8(
 		0, 0, 0, 0,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		vlan_flags | PKT_RX_IP_CKSUM_GOOD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		vlan_flags | RTE_MBUF_F_RX_IP_CKSUM_GOOD,
 		0, 0, 0, 0,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
-		PKT_RX_IP_CKSUM_GOOD);
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 
 	const __m128i vlan_csum_map_hi = _mm_set_epi8(
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t),
 		0, 0, 0, 0,
-		0, PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
-		PKT_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
+		0, RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t), 0,
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD >> sizeof(uint8_t));
 
 	/* mask everything except UDP header present if specified */
 	const __m128i udp_hdr_p_msk = _mm_set_epi16
@@ -190,7 +190,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 
 	const __m128i udp_csum_bad_shuf = _mm_set_epi8
 		(0, 0, 0, 0, 0, 0, 0, 0,
-		 0, 0, 0, 0, 0, 0, ~(uint8_t)PKT_RX_L4_CKSUM_BAD, 0xFF);
+		 0, 0, 0, 0, 0, 0, ~(uint8_t)RTE_MBUF_F_RX_L4_CKSUM_BAD, 0xFF);
 
 	ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
 	ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
@@ -228,7 +228,7 @@ desc_to_olflags_v(__m128i descs[4], __m128i mbuf_init, uint8_t vlan_flags,
 	vtag1 = _mm_or_si128(ptype0, vtag1);
 
 	/* convert the UDP header present 0x200 to 0x1 for aligning with each
-	 * PKT_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
+	 * RTE_MBUF_F_RX_L4_CKSUM_BAD value in low byte of 16 bits word ol_flag in
 	 * vtag1 (4x16). Then mask out the bad checksum value by shuffle and
 	 * bit-mask.
 	 */
@@ -428,7 +428,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	sw_ring = &rxq->sw_ring[rxq->rx_tail];
 
 	/* ensure these 2 flags are in the lower 8 bits */
-	RTE_BUILD_BUG_ON((PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED) > UINT8_MAX);
+	RTE_BUILD_BUG_ON((RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED) > UINT8_MAX);
 	vlan_flags = rxq->vlan_flags & UINT8_MAX;
 
 	/* A. load 4 packet in one loop
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index 616abec070..ef127f26c4 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -437,7 +437,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 				if (rh->r_dh.has_hash) {
 					uint64_t *hash_ptr;
 
-					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 					hash_ptr = rte_pktmbuf_mtod(nicbuf,
 								    uint64_t *);
 					lio_swap_8B_data(hash_ptr, 1);
@@ -494,7 +494,7 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 						uint64_t *hash_ptr;
 
 						nicbuf->ol_flags |=
-						    PKT_RX_RSS_HASH;
+						    RTE_MBUF_F_RX_RSS_HASH;
 						hash_ptr = rte_pktmbuf_mtod(
 						    nicbuf, uint64_t *);
 						lio_swap_8B_data(hash_ptr, 1);
@@ -547,10 +547,10 @@ lio_droq_fast_process_packet(struct lio_device *lio_dev,
 		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
 
 		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	if (droq->refill_count >= droq->refill_threshold) {
@@ -1675,13 +1675,13 @@ lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 		cmdsetup.s.iq_no = iq_no;
 
 		/* check checksum offload flags to form cmd */
-		if (m->ol_flags & PKT_TX_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			cmdsetup.s.ip_csum = 1;
 
-		if (m->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 			cmdsetup.s.tnl_csum = 1;
-		else if ((m->ol_flags & PKT_TX_TCP_CKSUM) ||
-				(m->ol_flags & PKT_TX_UDP_CKSUM))
+		else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
+				(m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
 			cmdsetup.s.transport_csum = 1;
 
 		if (m->nb_segs == 1) {
diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index ecf08f53cf..ed9e41fcde 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -406,7 +406,7 @@ mlx4_tx_burst_tso_get_params(struct rte_mbuf *buf,
 {
 	struct mlx4_sq *sq = &txq->msq;
 	const uint8_t tunneled = txq->priv->hw_csum_l2tun &&
-				 (buf->ol_flags & PKT_TX_TUNNEL_MASK);
+				 (buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	tinfo->tso_header_size = buf->l2_len + buf->l3_len + buf->l4_len;
 	if (tunneled)
@@ -915,7 +915,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			uint16_t flags16[2];
 		} srcrb;
 		uint32_t lkey;
-		bool tso = txq->priv->tso && (buf->ol_flags & PKT_TX_TCP_SEG);
+		bool tso = txq->priv->tso && (buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 
 		/* Clean up old buffer. */
 		if (likely(elt->buf != NULL)) {
@@ -991,15 +991,15 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		/* Enable HW checksum offload if requested */
 		if (txq->csum &&
 		    (buf->ol_flags &
-		     (PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM))) {
+		     (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM))) {
 			const uint64_t is_tunneled = (buf->ol_flags &
-						      (PKT_TX_TUNNEL_GRE |
-						       PKT_TX_TUNNEL_VXLAN));
+						      (RTE_MBUF_F_TX_TUNNEL_GRE |
+						       RTE_MBUF_F_TX_TUNNEL_VXLAN));
 
 			if (is_tunneled && txq->csum_l2tun) {
 				owner_opcode |= MLX4_WQE_CTRL_IIP_HDR_CSUM |
 						MLX4_WQE_CTRL_IL4_HDR_CSUM;
-				if (buf->ol_flags & PKT_TX_OUTER_IP_CKSUM)
+				if (buf->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
 					srcrb.flags |=
 					    RTE_BE32(MLX4_WQE_CTRL_IP_HDR_CSUM);
 			} else {
@@ -1112,18 +1112,18 @@ rxq_cq_to_ol_flags(uint32_t flags, int csum, int csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_IP_HDR_CSUM_OK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_STATUS_TCP_UDP_CSUM_OK,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	if ((flags & MLX4_CQE_L2_TUNNEL) && csum_l2tun)
 		ol_flags |=
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_IPOK,
-				       PKT_RX_IP_CKSUM_GOOD) |
+				       RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 			mlx4_transpose(flags,
 				       MLX4_CQE_L2_TUNNEL_L4_CSUM,
-				       PKT_RX_L4_CKSUM_GOOD);
+				       RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -1274,7 +1274,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			/* Update packet information. */
 			pkt->packet_type =
 				rxq_cq_to_pkt_type(cqe, rxq->l2tun_offload);
-			pkt->ol_flags = PKT_RX_RSS_HASH;
+			pkt->ol_flags = RTE_MBUF_F_RX_RSS_HASH;
 			pkt->hash.rss = cqe->immed_rss_invalid;
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c914a7120c..ffdd50c93d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -9275,7 +9275,7 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev,
 {
 	uint64_t ol_flags = m->ol_flags;
 	const struct mlx5_flow_tbl_data_entry *tble;
-	const uint64_t mask = PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	const uint64_t mask = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 
 	if (!is_tunnel_offload_active(dev)) {
 		info->flags = 0;
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e3b1051ba4..3ae62cb8e0 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -692,10 +692,10 @@ rxq_cq_to_ol_flags(volatile struct mlx5_cqe *cqe)
 	ol_flags =
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L3_HDR_VALID,
-			  PKT_RX_IP_CKSUM_GOOD) |
+			  RTE_MBUF_F_RX_IP_CKSUM_GOOD) |
 		TRANSPOSE(flags,
 			  MLX5_CQE_RX_L4_HDR_VALID,
-			  PKT_RX_L4_CKSUM_GOOD);
+			  RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	return ol_flags;
 }
 
@@ -731,7 +731,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			rss_hash_res = rte_be_to_cpu_32(mcqe->rx_hash_result);
 		if (rss_hash_res) {
 			pkt->hash.rss = rss_hash_res;
-			pkt->ol_flags |= PKT_RX_RSS_HASH;
+			pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		}
 	}
 	if (rxq->mark) {
@@ -745,9 +745,9 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			mark = ((mcqe->byte_cnt_flow & 0xff) << 8) |
 				(mcqe->flow_tag_high << 16);
 		if (MLX5_FLOW_MARK_IS_VALID(mark)) {
-			pkt->ol_flags |= PKT_RX_FDIR;
+			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
 			if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
-				pkt->ol_flags |= PKT_RX_FDIR_ID;
+				pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 				pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
 			}
 		}
@@ -775,7 +775,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
 			vlan_strip = mcqe->hdr_type &
 				     RTE_BE16(MLX5_CQE_VLAN_STRIPPED);
 		if (vlan_strip) {
-			pkt->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_be_to_cpu_16(cqe->vlan_info);
 		}
 	}
@@ -863,7 +863,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			}
 			pkt = seg;
 			MLX5_ASSERT(len >= (rxq->crc_present << 2));
-			pkt->ol_flags &= EXT_ATTACHED_MBUF;
+			pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
 			rxq_cq_to_mbuf(rxq, pkt, cqe, mcqe);
 			if (rxq->crc_present)
 				len -= RTE_ETHER_CRC_LEN;
@@ -872,7 +872,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 				mlx5_lro_update_hdr
 					(rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
 					 mcqe, rxq, len);
-				pkt->ol_flags |= PKT_RX_LRO;
+				pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 				pkt->tso_segsz = len / cqe->lro_num_seg;
 			}
 		}
@@ -1144,7 +1144,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 		if (cqe->lro_num_seg > 1) {
 			mlx5_lro_update_hdr(rte_pktmbuf_mtod(pkt, uint8_t *),
 					    cqe, mcqe, rxq, len);
-			pkt->ol_flags |= PKT_RX_LRO;
+			pkt->ol_flags |= RTE_MBUF_F_RX_LRO;
 			pkt->tso_segsz = len / cqe->lro_num_seg;
 		}
 		PKT_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 2b7ad3e48b..32e9c97b64 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -488,7 +488,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		shinfo = &buf->shinfos[strd_idx];
 		rte_mbuf_ext_refcnt_set(shinfo, 1);
 		/*
-		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
+		 * RTE_MBUF_F_EXTERNAL will be set to pkt->ol_flags when
 		 * attaching the stride to mbuf and more offload flags
 		 * will be added below by calling rxq_cq_to_mbuf().
 		 * Other fields will be overwritten.
@@ -497,7 +497,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 					  buf_len, shinfo);
 		/* Set mbuf head-room. */
 		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
-		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
+		MLX5_ASSERT(pkt->ol_flags == RTE_MBUF_F_EXTERNAL);
 		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
 			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
 		DATA_LEN(pkt) = len;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b68443bed5..21a455b1b2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -180,7 +180,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 		mbuf_init->nb_segs = 1;
 		mbuf_init->port = rxq->port_id;
 		if (priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF)
-			mbuf_init->ol_flags = EXT_ATTACHED_MBUF;
+			mbuf_init->ol_flags = RTE_MBUF_F_EXTERNAL;
 		/*
 		 * prevent compiler reordering:
 		 * rearm_data covers previous fields.
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 7b984eff35..646d2a31e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -255,10 +255,10 @@ mlx5_set_cksum_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_cksum_table); ++i) {
@@ -293,10 +293,10 @@ mlx5_set_swp_types_table(void)
 
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
 	for (i = 0; i < RTE_DIM(mlx5_swp_types_table); ++i) {
 		v = 0;
@@ -306,7 +306,7 @@ mlx5_set_swp_types_table(void)
 			v |= MLX5_ETH_WQE_L4_OUTER_UDP;
 		if (i & (1 << 4))
 			v |= MLX5_ETH_WQE_L3_INNER_IPV6;
-		if ((i & 3) == (PKT_TX_UDP_CKSUM >> 52))
+		if ((i & 3) == (RTE_MBUF_F_TX_UDP_CKSUM >> 52))
 			v |= MLX5_ETH_WQE_L4_INNER_UDP;
 		mlx5_swp_types_table[i] = v;
 	}
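
The bit layouts in the index comments above follow directly from the
flag positions: RTE_MBUF_F_TX_TCP_SEG is bit 50, so shifting ol_flags
right by 50 lands it in index bit 0. A worked example, assuming the
standard flag definitions:

	/* TSO with inner IP checksum, no tunnel:
	 * idx = ((RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IP_CKSUM) >> 50)
	 *     = (1 << 0) | (1 << 4) = 0x11
	 * so mlx5_cksum_table[0x11] holds the mode for that combination.
	 */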
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index 68cef1a83e..bcf487c34e 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -283,20 +283,20 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char fdir_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR, PKT_RX_FDIR,
-					PKT_RX_FDIR, PKT_RX_FDIR};
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+					RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 				const vector unsigned char fdir_all_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID,
-					PKT_RX_FDIR | PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID};
 				vector unsigned char fdir_id_flags =
 					(vector unsigned char)
 					(vector unsigned int){
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-					PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+					RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 				/* Extract flow_tag field. */
 				vector unsigned char ftag0 = vec_perm(mcqe1,
 							zero, flow_mark_shuf);
@@ -316,7 +316,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 					ol_flags_mask,
 					(vector unsigned long)fdir_all_flags);
 
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				invalid_mask = (vector unsigned char)
 					vec_cmpeq((vector unsigned int)ftag,
 					(vector unsigned int)zero);
@@ -376,10 +376,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const vector unsigned char vlan_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-					(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED)};
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+					(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED)};
 				const vector unsigned char cv_mask =
 					(vector unsigned char)
 					(vector unsigned int) {
@@ -433,10 +433,10 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 			const vector unsigned char hash_mask =
 				(vector unsigned char)(vector unsigned int) {
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH,
-					PKT_RX_RSS_HASH};
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH,
+					RTE_MBUF_F_RX_RSS_HASH};
 			const vector unsigned char rearm_flags =
 				(vector unsigned char)(vector unsigned int) {
 				(uint32_t)t_pkt->ol_flags,
@@ -531,13 +531,13 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	vector unsigned char pinfo, ptype;
 	vector unsigned char ol_flags = (vector unsigned char)
 		(vector unsigned int){
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag,
-			rxq->rss_hash * PKT_RX_RSS_HASH |
+			rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 				rxq->hw_timestamp * rxq->timestamp_rx_flag};
 	vector unsigned char cv_flags;
 	const vector unsigned char zero = (vector unsigned char){0};
@@ -551,21 +551,21 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 		(vector unsigned char)(vector unsigned int){
 		0x00000003, 0x00000003, 0x00000003, 0x00000003};
 	const vector unsigned char cv_flag_sel = (vector unsigned char){
-		0, (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1), 0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1), 0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		0, (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1), 0,
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1), 0,
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0};
 	const vector unsigned char cv_mask =
 		(vector unsigned char)(vector unsigned int){
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-		PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED};
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+		RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED};
 	const vector unsigned char mbuf_init =
 		(vector unsigned char)vec_vsx_ld
 			(0, (vector unsigned char *)&rxq->mbuf_initializer);
@@ -602,19 +602,19 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 			0xffffff00, 0xffffff00, 0xffffff00, 0xffffff00};
 		const vector unsigned char fdir_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR, PKT_RX_FDIR,
-			PKT_RX_FDIR, PKT_RX_FDIR};
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR,
+			RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR};
 		vector unsigned char fdir_id_flags =
 			(vector unsigned char)(vector unsigned int){
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID,
-			PKT_RX_FDIR_ID, PKT_RX_FDIR_ID};
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID,
+			RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID};
 		vector unsigned char flow_tag, invalid_mask;
 
 		flow_tag = (vector unsigned char)
 			vec_and((vector unsigned long)pinfo,
 			(vector unsigned long)pinfo_ft_mask);
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = (vector unsigned char)
 			vec_cmpeq((vector unsigned int)flow_tag,
 			(vector unsigned int)zero);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 5ff792f4cb..aa36df29a0 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -220,12 +220,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint32x4_t ft_mask =
 					vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
 				const uint32x4_t fdir_flags =
-					vdupq_n_u32(PKT_RX_FDIR);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
 				const uint32x4_t fdir_all_flags =
-					vdupq_n_u32(PKT_RX_FDIR |
-						    PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR |
+						    RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t fdir_id_flags =
-					vdupq_n_u32(PKT_RX_FDIR_ID);
+					vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 				uint32x4_t invalid_mask, ftag;
 
 				__asm__ volatile
@@ -240,7 +240,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				invalid_mask = vceqzq_u32(ftag);
 				ol_flags_mask = vorrq_u32(ol_flags_mask,
 							  fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = vorrq_u32(ol_flags,
 					vbicq_u32(fdir_flags, invalid_mask));
 				/* Mask out invalid entries. */
@@ -276,8 +276,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					mcq[pos % 8 + 3].hdr_type;
 				const uint32x4_t vlan_mask =
-					vdupq_n_u32(PKT_RX_VLAN |
-						    PKT_RX_VLAN_STRIPPED);
+					vdupq_n_u32(RTE_MBUF_F_RX_VLAN |
+						    RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const uint32x4_t cv_mask =
 					vdupq_n_u32(MLX5_CQE_VLAN_STRIPPED);
 				const uint32x4_t pkt_cv = {
@@ -317,7 +317,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const uint32x4_t hash_flags =
-				vdupq_n_u32(PKT_RX_RSS_HASH);
+				vdupq_n_u32(RTE_MBUF_F_RX_RSS_HASH);
 			const uint32x4_t rearm_flags =
 				vdupq_n_u32((uint32_t)t_pkt->ol_flags);
 
@@ -396,22 +396,22 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 	uint16x4_t ptype;
 	uint32x4_t pinfo, cv_flags;
 	uint32x4_t ol_flags =
-		vdupq_n_u32(rxq->rss_hash * PKT_RX_RSS_HASH |
+		vdupq_n_u32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 			    rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	const uint32x4_t ptype_ol_mask = { 0x106, 0x106, 0x106, 0x106 };
 	const uint8x16_t cv_flag_sel = {
 		0,
-		(uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
-		(uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
+		(uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+		(uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 		0,
-		(uint8_t)((PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD) >> 1),
+		(uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 		0, 0, 0, 0, 0, 0, 0, 0, 0
 	};
 	const uint32x4_t cv_mask =
-		vdupq_n_u32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			    PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		vdupq_n_u32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			    RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const uint64x2_t mbuf_init = vld1q_u64
 				((const uint64_t *)&rxq->mbuf_initializer);
 	uint64x2_t rearm0, rearm1, rearm2, rearm3;
@@ -419,11 +419,11 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq,
 
 	if (rxq->mark) {
 		const uint32x4_t ft_def = vdupq_n_u32(MLX5_FLOW_MARK_DEFAULT);
-		const uint32x4_t fdir_flags = vdupq_n_u32(PKT_RX_FDIR);
-		uint32x4_t fdir_id_flags = vdupq_n_u32(PKT_RX_FDIR_ID);
+		const uint32x4_t fdir_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR);
+		uint32x4_t fdir_id_flags = vdupq_n_u32(RTE_MBUF_F_RX_FDIR_ID);
 		uint32x4_t invalid_mask;
 
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = vceqzq_u32(flow_tag);
 		ol_flags = vorrq_u32(ol_flags,
 				     vbicq_u32(fdir_flags, invalid_mask));
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index adf991f013..b0fc29d7b9 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -204,12 +204,12 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const __m128i ft_mask =
 					_mm_set1_epi32(0xffffff00);
 				const __m128i fdir_flags =
-					_mm_set1_epi32(PKT_RX_FDIR);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
 				const __m128i fdir_all_flags =
-					_mm_set1_epi32(PKT_RX_FDIR |
-						       PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR |
+						       RTE_MBUF_F_RX_FDIR_ID);
 				__m128i fdir_id_flags =
-					_mm_set1_epi32(PKT_RX_FDIR_ID);
+					_mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 
 				/* Extract flow_tag field. */
 				__m128i ftag0 =
@@ -223,7 +223,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 
 				ol_flags_mask = _mm_or_si128(ol_flags_mask,
 							     fdir_all_flags);
-				/* Set PKT_RX_FDIR if flow tag is non-zero. */
+				/* Set RTE_MBUF_F_RX_FDIR if flow tag is non-zero. */
 				ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
 							 fdir_flags));
@@ -260,8 +260,8 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				const uint8_t pkt_hdr3 =
 					_mm_extract_epi8(mcqe2, 8);
 				const __m128i vlan_mask =
-					_mm_set1_epi32(PKT_RX_VLAN |
-						       PKT_RX_VLAN_STRIPPED);
+					_mm_set1_epi32(RTE_MBUF_F_RX_VLAN |
+						       RTE_MBUF_F_RX_VLAN_STRIPPED);
 				const __m128i cv_mask =
 					_mm_set1_epi32(MLX5_CQE_VLAN_STRIPPED);
 				const __m128i pkt_cv =
@@ -303,7 +303,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 				}
 			}
 			const __m128i hash_flags =
-				_mm_set1_epi32(PKT_RX_RSS_HASH);
+				_mm_set1_epi32(RTE_MBUF_F_RX_RSS_HASH);
 			const __m128i rearm_flags =
 				_mm_set1_epi32((uint32_t)t_pkt->ol_flags);
 
@@ -381,7 +381,7 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 {
 	__m128i pinfo0, pinfo1;
 	__m128i pinfo, ptype;
-	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * PKT_RX_RSS_HASH |
+	__m128i ol_flags = _mm_set1_epi32(rxq->rss_hash * RTE_MBUF_F_RX_RSS_HASH |
 					  rxq->hw_timestamp * rxq->timestamp_rx_flag);
 	__m128i cv_flags;
 	const __m128i zero = _mm_setzero_si128();
@@ -390,17 +390,17 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	const __m128i pinfo_mask = _mm_set1_epi32(0x3);
 	const __m128i cv_flag_sel =
 		_mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, 0,
-			     (uint8_t)((PKT_RX_IP_CKSUM_GOOD |
-					PKT_RX_L4_CKSUM_GOOD) >> 1),
+			     (uint8_t)((RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+					RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_L4_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_L4_CKSUM_GOOD >> 1),
 			     0,
-			     (uint8_t)(PKT_RX_IP_CKSUM_GOOD >> 1),
-			     (uint8_t)(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED),
+			     (uint8_t)(RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1),
+			     (uint8_t)(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED),
 			     0);
 	const __m128i cv_mask =
-		_mm_set1_epi32(PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD |
-			      PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+		_mm_set1_epi32(RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD |
+			       RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 	const __m128i mbuf_init =
 		_mm_load_si128((__m128i *)&rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -416,12 +416,12 @@ rxq_cq_to_ptype_oflags_v(struct mlx5_rxq_data *rxq, __m128i cqes[4],
 	ptype = _mm_unpacklo_epi64(pinfo0, pinfo1);
 	if (rxq->mark) {
 		const __m128i pinfo_ft_mask = _mm_set1_epi32(0xffffff00);
-		const __m128i fdir_flags = _mm_set1_epi32(PKT_RX_FDIR);
-		__m128i fdir_id_flags = _mm_set1_epi32(PKT_RX_FDIR_ID);
+		const __m128i fdir_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR);
+		__m128i fdir_id_flags = _mm_set1_epi32(RTE_MBUF_F_RX_FDIR_ID);
 		__m128i flow_tag, invalid_mask;
 
 		flow_tag = _mm_and_si128(pinfo, pinfo_ft_mask);
-		/* Check if flow tag is non-zero then set PKT_RX_FDIR. */
+		/* Check if flow tag is non-zero then set RTE_MBUF_F_RX_FDIR. */
 		invalid_mask = _mm_cmpeq_epi32(flow_tag, zero);
 		ol_flags = _mm_or_si128(ol_flags,
 					_mm_andnot_si128(invalid_mask,
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 4f83291cc2..67985d3402 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -78,7 +78,7 @@ uint16_t mlx5_tx_burst_##func(void *txq, \
 
 /* Mbuf dynamic flag offset for inline. */
 extern uint64_t rte_net_mlx5_dynf_inline_mask;
-#define PKT_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
+#define RTE_MBUF_F_TX_DYNF_NOINLINE rte_net_mlx5_dynf_inline_mask
 
 extern uint32_t mlx5_ptype_table[] __rte_cache_aligned;
 extern uint8_t mlx5_cksum_table[1 << 10] __rte_cache_aligned;
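
Unlike the static RTE_MBUF_F_* bits, this no-inline hint is a
dynamically registered flag whose bit position is only known at run
time. Resolving such a flag at init looks roughly like this (the flag
name is hypothetical; only the lookup API itself is standard):

	#include <rte_mbuf_dyn.h>

	uint64_t noinline_mask = 0;
	int bit = rte_mbuf_dynflag_lookup("example_noinline_hint", NULL);

	if (bit >= 0)
		noinline_mask = 1ULL << bit;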
@@ -513,22 +513,22 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	if (!MLX5_TXOFF_CONFIG(SWP))
 		return 0;
 	ol = loc->mbuf->ol_flags;
-	tunnel = ol & PKT_TX_TUNNEL_MASK;
+	tunnel = ol & RTE_MBUF_F_TX_TUNNEL_MASK;
 	/*
 	 * Check whether Software Parser is required.
 	 * Only customized tunnels may ask for it.
 	 */
-	if (likely(tunnel != PKT_TX_TUNNEL_UDP && tunnel != PKT_TX_TUNNEL_IP))
+	if (likely(tunnel != RTE_MBUF_F_TX_TUNNEL_UDP && tunnel != RTE_MBUF_F_TX_TUNNEL_IP))
 		return 0;
 	/*
 	 * The index should have:
-	 * bit[0:1] = PKT_TX_L4_MASK
-	 * bit[4] = PKT_TX_IPV6
-	 * bit[8] = PKT_TX_OUTER_IPV6
-	 * bit[9] = PKT_TX_OUTER_UDP
+	 * bit[0:1] = RTE_MBUF_F_TX_L4_MASK
+	 * bit[4] = RTE_MBUF_F_TX_IPV6
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IPV6
+	 * bit[9] = RTE_MBUF_F_TX_OUTER_UDP
 	 */
-	idx = (ol & (PKT_TX_L4_MASK | PKT_TX_IPV6 | PKT_TX_OUTER_IPV6)) >> 52;
-	idx |= (tunnel == PKT_TX_TUNNEL_UDP) ? (1 << 9) : 0;
+	idx = (ol & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_IPV6 | RTE_MBUF_F_TX_OUTER_IPV6)) >> 52;
+	idx |= (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP) ? (1 << 9) : 0;
 	*swp_flags = mlx5_swp_types_table[idx];
 	/*
 	 * Set offsets for SW parser. Since ConnectX-5, SW parser just
@@ -538,19 +538,19 @@ txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & RTE_MBUF_F_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
-	if (tunnel == PKT_TX_TUNNEL_UDP)
+	if (tunnel == RTE_MBUF_F_TX_TUNNEL_UDP)
 		set |= off >> 1; /* Outer L4 offset. */
-	if (ol & (PKT_TX_IPV4 | PKT_TX_IPV6)) { /* Inner IP. */
-		const uint64_t csum = ol & PKT_TX_L4_MASK;
+	if (ol & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) { /* Inner IP. */
+		const uint64_t csum = ol & RTE_MBUF_F_TX_L4_MASK;
 		off += loc->mbuf->l2_len;
 		set |= (off >> 1) << 24; /* Inner L3 offset. */
-		if (csum == PKT_TX_TCP_CKSUM ||
-		    csum == PKT_TX_UDP_CKSUM ||
-		    (MLX5_TXOFF_CONFIG(TSO) && ol & PKT_TX_TCP_SEG)) {
+		if (csum == RTE_MBUF_F_TX_TCP_CKSUM ||
+		    csum == RTE_MBUF_F_TX_UDP_CKSUM ||
+		    (MLX5_TXOFF_CONFIG(TSO) && ol & RTE_MBUF_F_TX_TCP_SEG)) {
 			off += loc->mbuf->l3_len;
 			set |= (off >> 1) << 16; /* Inner L4 offset. */
 		}
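
Same construction as the checksum table: the shift by 52 aligns
RTE_MBUF_F_TX_L4_MASK (bits 52:53) to index bits 0:1. A worked example,
assuming the standard flag positions:

	/* Inner UDP checksum, inner IPv6, over a UDP tunnel:
	 * idx = ((RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IPV6) >> 52)
	 *       | (1 << 9)
	 *     = 0x3 | 0x10 | 0x200 = 0x213
	 */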
@@ -572,16 +572,16 @@ static __rte_always_inline uint8_t
 txq_ol_cksum_to_cs(struct rte_mbuf *buf)
 {
 	uint32_t idx;
-	uint8_t is_tunnel = !!(buf->ol_flags & PKT_TX_TUNNEL_MASK);
-	const uint64_t ol_flags_mask = PKT_TX_TCP_SEG | PKT_TX_L4_MASK |
-				       PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM;
+	uint8_t is_tunnel = !!(buf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
+	const uint64_t ol_flags_mask = RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_L4_MASK |
+				       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_OUTER_IP_CKSUM;
 
 	/*
 	 * The index should have:
-	 * bit[0] = PKT_TX_TCP_SEG
-	 * bit[2:3] = PKT_TX_UDP_CKSUM, PKT_TX_TCP_CKSUM
-	 * bit[4] = PKT_TX_IP_CKSUM
-	 * bit[8] = PKT_TX_OUTER_IP_CKSUM
+	 * bit[0] = RTE_MBUF_F_TX_TCP_SEG
+	 * bit[2:3] = RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_TCP_CKSUM
+	 * bit[4] = RTE_MBUF_F_TX_IP_CKSUM
+	 * bit[8] = RTE_MBUF_F_TX_OUTER_IP_CKSUM
 	 * bit[9] = tunnel
 	 */
 	idx = ((buf->ol_flags & ol_flags_mask) >> 50) | (!!is_tunnel << 9);
@@ -952,12 +952,12 @@ mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1013,7 +1013,7 @@ mlx5_tx_eseg_dmin(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
@@ -1097,7 +1097,7 @@ mlx5_tx_eseg_data(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	psrc = rte_pktmbuf_mtod(loc->mbuf, uint8_t *);
@@ -1206,7 +1206,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
 			MLX5_ASSERT(loc->mbuf_nseg > 1);
 			MLX5_ASSERT(loc->mbuf);
 			--loc->mbuf_nseg;
-			if (loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+			if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 				unsigned int diff;
 
 				if (copy >= must) {
@@ -1310,7 +1310,7 @@ mlx5_tx_eseg_mdat(struct mlx5_txq_data *__rte_restrict txq,
 	es->swp_offs = txq_mbuf_to_swp(loc, &es->swp_flags, olx);
 	/* Fill metadata field if needed. */
 	es->metadata = MLX5_TXOFF_CONFIG(METADATA) ?
-		       loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
+		       loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
 		       rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) :
 		       0 : 0;
 	MLX5_ASSERT(inlen >= MLX5_ESEG_MIN_INLINE_SIZE);
@@ -1818,13 +1818,13 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
 	if (unlikely((!inlen || !loc->mbuf->tso_segsz)))
 		return MLX5_TXCMP_CODE_ERROR;
-	if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+	if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		inlen += loc->mbuf->outer_l2_len + loc->mbuf->outer_l3_len;
 	/* Packet must contain all TSO headers. */
 	if (unlikely(inlen > MLX5_MAX_TSO_HEADER ||
@@ -1933,7 +1933,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN)
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2032,7 +2032,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2040,7 +2040,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_ERROR;
 	MLX5_ASSERT(txq->inlen_send >= MLX5_ESEG_MIN_INLINE_SIZE);
 	if (inlen > txq->inlen_send ||
-	    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE) {
+	    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE) {
 		struct rte_mbuf *mbuf;
 		unsigned int nxlen;
 		uintptr_t start;
@@ -2062,7 +2062,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 			 * support the offload, will do with software inline.
 			 */
 			inlen = MLX5_ESEG_MIN_INLINE_SIZE;
-		} else if (mbuf->ol_flags & PKT_TX_DYNF_NOINLINE ||
+		} else if (mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE ||
 			   nxlen > txq->inlen_send) {
 			return mlx5_tx_packet_multi_send(txq, loc, olx);
 		} else {
@@ -2202,7 +2202,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 		if (loc->elts_free < NB_SEGS(loc->mbuf))
 			return MLX5_TXCMP_CODE_EXIT;
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/* Proceed with multi-segment TSO. */
 			ret = mlx5_tx_packet_multi_tso(txq, loc, olx);
 		} else if (MLX5_TXOFF_CONFIG(INLINE)) {
@@ -2228,7 +2228,7 @@ mlx5_tx_burst_mseg(struct mlx5_txq_data *__rte_restrict txq,
 			continue;
 		/* Here ends the series of multi-segment packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+		    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 			return MLX5_TXCMP_CODE_TSO;
 		return MLX5_TXCMP_CODE_SINGLE;
 	}
@@ -2295,7 +2295,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+		    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2306,7 +2306,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		       loc->mbuf->l3_len + loc->mbuf->l4_len;
 		if (unlikely((!hlen || !loc->mbuf->tso_segsz)))
 			return MLX5_TXCMP_CODE_ERROR;
-		if (loc->mbuf->ol_flags & PKT_TX_TUNNEL_MASK)
+		if (loc->mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 			hlen += loc->mbuf->outer_l2_len +
 				loc->mbuf->outer_l3_len;
 		/* Segment must contain all TSO headers. */
@@ -2372,7 +2372,7 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		if (MLX5_TXOFF_CONFIG(MULTI) &&
 		    unlikely(NB_SEGS(loc->mbuf) > 1))
 			return MLX5_TXCMP_CODE_MULTI;
-		if (likely(!(loc->mbuf->ol_flags & PKT_TX_TCP_SEG)))
+		if (likely(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)))
 			return MLX5_TXCMP_CODE_SINGLE;
 		/* Continue with the next TSO packet. */
 	}
@@ -2413,14 +2413,14 @@ mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 	/* Check for TSO packet. */
 	if (newp &&
 	    MLX5_TXOFF_CONFIG(TSO) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_TCP_SEG))
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		return MLX5_TXCMP_CODE_TSO;
 	/* Check if eMPW is enabled at all. */
 	if (!MLX5_TXOFF_CONFIG(EMPW))
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
+	    unlikely(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2473,8 +2473,8 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* Fill metadata field if needed. */
 	if (MLX5_TXOFF_CONFIG(METADATA) &&
-		es->metadata != (loc->mbuf->ol_flags & PKT_TX_DYNF_METADATA ?
-		rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) : 0))
+		es->metadata != (loc->mbuf->ol_flags & RTE_MBUF_DYNFLAG_TX_METADATA ?
+				 rte_cpu_to_be_32(*RTE_FLOW_DYNF_METADATA(loc->mbuf)) : 0))
 		return false;
 	/* Legacy MPW can send packets with the same length only. */
 	if (MLX5_TXOFF_CONFIG(MPW) &&
@@ -2482,7 +2482,7 @@ mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2918,7 +2918,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline or not inline - that's the Question. */
 			if (dlen > txq->inlen_empw ||
-			    loc->mbuf->ol_flags & PKT_TX_DYNF_NOINLINE)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_DYNF_NOINLINE)
 				goto pointer_empw;
 			if (MLX5_TXOFF_CONFIG(MPW)) {
 				if (dlen > txq->inlen_send)
@@ -2943,7 +2943,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3008,7 +3008,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN));
+					    RTE_MBUF_F_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3153,7 +3153,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3174,7 +3174,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 				if (inlen <= MLX5_ESEG_MIN_INLINE_SIZE)
 					return MLX5_TXCMP_CODE_ERROR;
 				if (loc->mbuf->ol_flags &
-				    PKT_TX_DYNF_NOINLINE) {
+				    RTE_MBUF_F_TX_DYNF_NOINLINE) {
 					/*
 					 * The hint flag not to inline packet
 					 * data is set. Check whether we can
@@ -3384,7 +3384,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN)
+			    loc->mbuf->ol_flags & RTE_MBUF_F_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
@@ -3580,7 +3580,7 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		/* Dedicated branch for single-segment TSO packets. */
 		if (MLX5_TXOFF_CONFIG(TSO) &&
-		    unlikely(loc.mbuf->ol_flags & PKT_TX_TCP_SEG)) {
+		    unlikely(loc.mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			/*
 			 * TSO might require special way for inlining
 			 * (dedicated parameters) and is sent with
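
The metadata hunks above depend on the renamed Tx metadata dynamic flag. For
context, a minimal application-side sketch (illustrative, not patch content;
error handling simplified) of how the dynflag and its field are set:

	#include <rte_flow.h>
	#include <rte_mbuf.h>

	static int
	set_tx_metadata(struct rte_mbuf *m, uint32_t md)
	{
		/* The dynamic field/flag must be registered once at init. */
		if (!rte_flow_dynf_metadata_avail() &&
		    rte_flow_dynf_metadata_register() < 0)
			return -1;
		*RTE_FLOW_DYNF_METADATA(m) = md;
		m->ol_flags |= RTE_MBUF_DYNFLAG_TX_METADATA;
		return 0;
	}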
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f..d4a06d5795 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -64,9 +64,9 @@
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
 			    DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-				PKT_TX_TCP_CKSUM | \
-				PKT_TX_UDP_CKSUM)
+#define MVNETA_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+				RTE_MBUF_F_TX_TCP_CKSUM | \
+				RTE_MBUF_F_TX_UDP_CKSUM)
 
 struct mvneta_priv {
 	/* Hot fields, used in fast path. */
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 2d61930382..f5340aa8df 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -304,18 +304,18 @@ mvneta_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = NETA_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = NETA_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if (ol_flags & PKT_TX_TCP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if (ol_flags & PKT_TX_UDP_CKSUM) {
+	} else if (ol_flags & RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = NETA_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
@@ -342,15 +342,15 @@ mvneta_desc_to_ol_flags(struct neta_ppio_desc *desc)
 
 	status = neta_ppio_inq_desc_get_l3_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags = PKT_RX_IP_CKSUM_BAD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		flags = PKT_RX_IP_CKSUM_GOOD;
+		flags = RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	status = neta_ppio_inq_desc_get_l4_pkt_error(desc);
 	if (unlikely(status != NETA_DESC_ERR_OK))
-		flags |= PKT_RX_L4_CKSUM_BAD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	else
-		flags |= PKT_RX_L4_CKSUM_GOOD;
+		flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 
 	return flags;
 }
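
The mvneta Tx path above only consumes the flags; the application side of the
contract (a sketch, function name assumed) is to set the L3/L4 request flags
together with the header lengths before rte_eth_tx_burst():

	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_mbuf.h>

	static void
	request_ipv4_tcp_cksum(struct rte_mbuf *m)
	{
		m->ol_flags |= RTE_MBUF_F_TX_IPV4 |	/* L3 is IPv4 */
			       RTE_MBUF_F_TX_IP_CKSUM |	/* HW fills IP csum */
			       RTE_MBUF_F_TX_TCP_CKSUM;	/* HW fills TCP csum */
		m->l2_len = sizeof(struct rte_ether_hdr);
		m->l3_len = sizeof(struct rte_ipv4_hdr);
	}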
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 65d011300a..50b991afef 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -69,9 +69,9 @@
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
 			  DEV_TX_OFFLOAD_MULTI_SEGS)
 
-#define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
-			      PKT_TX_TCP_CKSUM | \
-			      PKT_TX_UDP_CKSUM)
+#define MRVL_TX_PKT_OFFLOADS (RTE_MBUF_F_TX_IP_CKSUM | \
+			      RTE_MBUF_F_TX_TCP_CKSUM | \
+			      RTE_MBUF_F_TX_UDP_CKSUM)
 
 static const char * const valid_args[] = {
 	MRVL_IFACE_NAME_ARG,
@@ -2549,18 +2549,18 @@ mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc, uint64_t packet_type)
 	if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
 		status = pp2_ppio_inq_desc_get_l3_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_IP_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		else
-			flags |= PKT_RX_IP_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	}
 
 	if (((packet_type & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) ||
 	    ((packet_type & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP)) {
 		status = pp2_ppio_inq_desc_get_l4_pkt_error(desc);
 		if (unlikely(status != PP2_DESC_ERR_OK))
-			flags |= PKT_RX_L4_CKSUM_BAD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		else
-			flags |= PKT_RX_L4_CKSUM_GOOD;
+			flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return flags;
@@ -2720,18 +2720,18 @@ mrvl_prepare_proto_info(uint64_t ol_flags,
 	 * default value
 	 */
 	*l3_type = PP2_OUTQ_L3_TYPE_IPV4;
-	*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
+	*gen_l3_cksum = ol_flags & RTE_MBUF_F_TX_IP_CKSUM ? 1 : 0;
 
-	if (ol_flags & PKT_TX_IPV6) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV6) {
 		*l3_type = PP2_OUTQ_L3_TYPE_IPV6;
 		/* no checksum for ipv6 header */
 		*gen_l3_cksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_TCP;
 		*gen_l4_cksum = 1;
-	} else if ((ol_flags & PKT_TX_L4_MASK) ==  PKT_TX_UDP_CKSUM) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) ==  RTE_MBUF_F_TX_UDP_CKSUM) {
 		*l4_type = PP2_OUTQ_L4_TYPE_UDP;
 		*gen_l4_cksum = 1;
 	} else {
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index acae68e082..b915bff459 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -615,7 +615,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->vlan_info != HN_NDIS_VLAN_INFO_INVALID) {
 		m->vlan_tci = info->vlan_info;
-		m->ol_flags |= PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_RX_VLAN;
 
 		/* NDIS always strips tag, put it back if necessary */
 		if (!hv->vlan_strip && rte_vlan_insert(&m)) {
@@ -630,18 +630,18 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
 
 	if (info->csum_info != HN_NDIS_RXCSUM_INFO_INVALID) {
 		if (info->csum_info & NDIS_RXCSUM_INFO_IPCS_OK)
-			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 		if (info->csum_info & (NDIS_RXCSUM_INFO_UDPCS_OK
 				       | NDIS_RXCSUM_INFO_TCPCS_OK))
-			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		else if (info->csum_info & (NDIS_RXCSUM_INFO_TCPCS_FAILED
 					    | NDIS_RXCSUM_INFO_UDPCS_FAILED))
-			m->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 
 	if (info->hash_info != HN_NDIS_HASH_INFO_INVALID) {
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		m->hash.rss = info->hash_value;
 	}
 
@@ -1331,17 +1331,17 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_LSO2_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_LSO);
 
-		if (m->ol_flags & PKT_TX_IPV6) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6) {
 			*pi_data = NDIS_LSO2_INFO_MAKEIPV6(hlen,
 							   m->tso_segsz);
 		} else {
@@ -1349,23 +1349,23 @@ static void hn_encap(struct rndis_packet_msg *pkt,
 							   m->tso_segsz);
 		}
 	} else if (m->ol_flags &
-		   (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)) {
+		   (RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_IP_CKSUM)) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_TXCSUM_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_CSUM);
 		*pi_data = 0;
 
-		if (m->ol_flags & PKT_TX_IPV6)
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV6)
 			*pi_data |= NDIS_TXCSUM_INFO_IPV6;
-		if (m->ol_flags & PKT_TX_IPV4) {
+		if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 			*pi_data |= NDIS_TXCSUM_INFO_IPV4;
 
-			if (m->ol_flags & PKT_TX_IP_CKSUM)
+			if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 				*pi_data |= NDIS_TXCSUM_INFO_IPCS;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_CKSUM)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKTCPCS(hlen);
-		else if (m->ol_flags & PKT_TX_UDP_CKSUM)
+		else if (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM)
 			*pi_data |= NDIS_TXCSUM_INFO_MKUDPCS(hlen);
 	}
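
On the receive side of the netvsc hunks, the two VLAN flags are set together;
a consumer-side sketch (illustrative only) of what that pair means:

	/* RTE_MBUF_F_RX_VLAN: vlan_tci is valid.
	 * RTE_MBUF_F_RX_VLAN_STRIPPED: the tag was also removed from the
	 * packet data, so only vlan_tci carries it.
	 */
	if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
		printf("stripped VLAN, tci=%u\n", m->vlan_tci);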
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 0df300fe0d..300382984d 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -203,7 +203,7 @@ nfp_net_set_hash(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	}
 
 	mbuf->hash.rss = hash;
-	mbuf->ol_flags |= PKT_RX_RSS_HASH;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	switch (hash_type) {
 	case NFP_NET_RSS_IPV4:
@@ -245,9 +245,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 	/* If IPv4 and IP checksum error, fail */
 	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
 	    !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
-		mb->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
-		mb->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
 	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
@@ -255,9 +255,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 		return;
 
 	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
-		mb->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
-		mb->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 /*
@@ -403,7 +403,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if ((rxds->rxd.flags & PCIE_DESC_RX_VLAN) &&
 		    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN)) {
 			mb->vlan_tci = rte_cpu_to_le_32(rxds->rxd.vlan);
-			mb->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
 
 		/* Adding the mbuf to the mbuf array passed by the app */
@@ -827,7 +827,7 @@ nfp_net_tx_tso(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 
 	ol_flags = mb->ol_flags;
 
-	if (!(ol_flags & PKT_TX_TCP_SEG))
+	if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG))
 		goto clean_txd;
 
 	txd->l3_offset = mb->l2_len;
@@ -859,19 +859,19 @@ nfp_net_tx_cksum(struct nfp_net_txq *txq, struct nfp_net_tx_desc *txd,
 	ol_flags = mb->ol_flags;
 
 	/* IPv6 does not need checksum */
-	if (ol_flags & PKT_TX_IP_CKSUM)
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 		txd->flags |= PCIE_DESC_TX_IP4_CSUM;
 
-	switch (ol_flags & PKT_TX_L4_MASK) {
-	case PKT_TX_UDP_CKSUM:
+	switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_UDP_CSUM;
 		break;
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		txd->flags |= PCIE_DESC_TX_TCP_CSUM;
 		break;
 	}
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK))
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK))
 		txd->flags |= PCIE_DESC_TX_CSUM;
 }
 
@@ -935,7 +935,7 @@ nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN) &&
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
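
The nfp checksum hunk above illustrates a general property of the renamed
flags: the Tx L4 checksum request is a 2-bit enum inside ol_flags, not a set
of independent bits, so it must be masked before comparison. A minimal
sketch:

	switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
	case RTE_MBUF_F_TX_TCP_CKSUM:
		/* TCP checksum offload requested */
		break;
	case RTE_MBUF_F_TX_UDP_CKSUM:
		/* UDP checksum offload requested */
		break;
	case RTE_MBUF_F_TX_SCTP_CKSUM:
		/* SCTP checksum offload requested */
		break;
	default:
		/* RTE_MBUF_F_TX_L4_NO_CKSUM: nothing requested */
		break;
	}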
diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
index e0723ac26a..eeadd555c7 100644
--- a/drivers/net/octeontx/octeontx_rxtx.h
+++ b/drivers/net/octeontx/octeontx_rxtx.h
@@ -242,20 +242,20 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 	 * 0x2 - TCP L4 checksum
 	 * 0x3 - SCTP L4 checksum
 	 */
-	const uint8_t csum = (!(((ol_flags ^ PKT_TX_UDP_CKSUM) >> 52) & 0x3) +
-		      (!(((ol_flags ^ PKT_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
-		      (!(((ol_flags ^ PKT_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
-
-	const uint8_t is_tunnel_parsed = (!!(ol_flags & PKT_TX_TUNNEL_GTP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN_GPE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_VXLAN) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GRE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_GENEVE) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IP) ||
-				      !!(ol_flags & PKT_TX_TUNNEL_IPIP));
-
-	const uint8_t csum_outer = (!!(ol_flags & PKT_TX_OUTER_UDP_CKSUM) ||
-				    !!(ol_flags & PKT_TX_TUNNEL_UDP));
+	const uint8_t csum = (!(((ol_flags ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_TCP_CKSUM) >> 52) & 0x3) * 2) +
+		      (!(((ol_flags ^ RTE_MBUF_F_TX_SCTP_CKSUM) >> 52) & 0x3) * 3));
+
+	const uint8_t is_tunnel_parsed = (!!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GTP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GRE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_GENEVE) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IP) ||
+				      !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_IPIP));
+
+	const uint8_t csum_outer = (!!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) ||
+				    !!(ol_flags & RTE_MBUF_F_TX_TUNNEL_UDP));
 	const uint8_t outer_l2_len = m->outer_l2_len;
 	const uint8_t l2_len = m->l2_len;
 
@@ -266,7 +266,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = outer_l2_len;
 			send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 			/* Outer L4 */
 			send_hdr->w0.ckl4 = csum_outer;
@@ -277,7 +277,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			/* Set clke for PKO to calculate inner IPV4 header
 			 * checksum.
 			 */
-			send_hdr->w0.ckle = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckle = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.cklf = csum;
@@ -286,7 +286,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 			send_hdr->w0.l3ptr = l2_len;
 			send_hdr->w0.l4ptr = l2_len + m->l3_len;
 			/* Set clk3 for PKO to calculate IPV4 header checksum */
-			send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+			send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 			/* Inner L4 */
 			send_hdr->w0.ckl4 = csum;
@@ -296,7 +296,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = outer_l2_len;
 		send_hdr->w0.l4ptr = outer_l2_len + m->outer_l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_OUTER_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4);
 
 		/* Outer L4 */
 		send_hdr->w0.ckl4 = csum_outer;
@@ -305,7 +305,7 @@ octeontx_tx_checksum_offload(uint64_t *cmd_buf, const uint16_t flags,
 		send_hdr->w0.l3ptr = l2_len;
 		send_hdr->w0.l4ptr = l2_len + m->l3_len;
 		/* Set clk3 for PKO to calculate IPV4 header checksum */
-		send_hdr->w0.ckl3 = !!(ol_flags & PKT_TX_IPV4);
+		send_hdr->w0.ckl3 = !!(ol_flags & RTE_MBUF_F_TX_IPV4);
 
 		/* Inner L4 */
 		send_hdr->w0.ckl4 = csum;
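
The XOR expressions above work because the L4 request values form an enum in
bits 52..53 (TCP = 1, SCTP = 2, UDP = 3 after the shift), so XOR-ing with one
value zeroes that field only on an exact match. Spelled out (a sketch, helper
name assumed):

	static inline int
	is_udp_cksum(uint64_t ol_flags)
	{
		/* Branch-free form used above: zero iff bits 52..53 == 3. */
		int xor_form =
			!(((ol_flags ^ RTE_MBUF_F_TX_UDP_CKSUM) >> 52) & 0x3);
		/* Equivalent plain comparison: */
		int cmp_form = (ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
			       RTE_MBUF_F_TX_UDP_CKSUM;
		RTE_ASSERT(xor_form == cmp_form);
		return xor_form;
	}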
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc6989..8efeb154b4 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -746,15 +746,15 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	uint16_t flags = 0;
 
 	/* Fastpath is dependent on these enums */
-	RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52));
-	RTE_BUILD_BUG_ON(PKT_TX_IP_CKSUM != (1ULL << 54));
-	RTE_BUILD_BUG_ON(PKT_TX_IPV4 != (1ULL << 55));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IP_CKSUM != (1ULL << 58));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV4 != (1ULL << 59));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_IPV6 != (1ULL << 60));
-	RTE_BUILD_BUG_ON(PKT_TX_OUTER_UDP_CKSUM != (1ULL << 41));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
+	RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
 	RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
 	RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
 	RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
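
These build-time assertions pin the numeric values of the renamed flags
because the fast path shifts the flag field straight into descriptor fields,
as in the Tx hunks further below:

	const uint64_t ol_flags = m->ol_flags;
	/* Valid only while the RTE_BUILD_BUG_ON()s above hold:
	 * 0 = no checksum, 1 = TCP, 2 = SCTP, 3 = UDP.
	 */
	const uint8_t il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;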
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
index 4764608c2d..5fa9ae1396 100644
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -264,9 +264,9 @@ nix_create_rx_ol_flags_array(void *mem)
 		errlev = idx & 0xf;
 		errcode = (idx & 0xff0) >> 4;
 
-		val = PKT_RX_IP_CKSUM_UNKNOWN;
-		val |= PKT_RX_L4_CKSUM_UNKNOWN;
-		val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN;
+		val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
+		val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
 
 		switch (errlev) {
 		case NPC_ERRLEV_RE:
@@ -274,46 +274,46 @@ nix_create_rx_ol_flags_array(void *mem)
 			 * including Outer L2 length mismatch error
 			 */
 			if (errcode) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LC:
 			if (errcode == NPC_EC_OIP4_CSUM ||
 			    errcode == NPC_EC_IP_FRAG_OFFSET_1) {
-				val |= PKT_RX_IP_CKSUM_BAD;
-				val |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 			break;
 		case NPC_ERRLEV_LG:
 			if (errcode == NPC_EC_IIP4_CSUM)
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			else
-				val |= PKT_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case NPC_ERRLEV_NIX:
 			if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
 			    errcode == NIX_RX_PERRCODE_OL4_LEN ||
 			    errcode == NIX_RX_PERRCODE_OL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
-				val |= PKT_RX_OUTER_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
 				   errcode == NIX_RX_PERRCODE_IL4_LEN ||
 				   errcode == NIX_RX_PERRCODE_IL4_PORT) {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
 				   errcode == NIX_RX_PERRCODE_OL3_LEN) {
-				val |= PKT_RX_IP_CKSUM_BAD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			} else {
-				val |= PKT_RX_IP_CKSUM_GOOD;
-				val |= PKT_RX_L4_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+				val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 			break;
 		}
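
The function above precomputes one ol_flags word per (errlev, errcode) pair so
the hot path does a single table load instead of a branch cascade. A
consumer-side sketch (helper name and element type assumed for illustration):

	static inline uint64_t
	nix_rx_olflags_lookup(const uint64_t *ol_flags_tbl, uint8_t errlev,
			      uint8_t errcode)
	{
		/* Inverse of the decode above: idx = (errcode << 4) | errlev */
		return ol_flags_tbl[(errcode << 4) | errlev];
	}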
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952..5a7d220e22 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -92,7 +92,7 @@ static __rte_always_inline uint64_t
 nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
 {
 	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
-		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
 	}
 
@@ -103,7 +103,7 @@ static __rte_always_inline uint64_t
 nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 {
 	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
-		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
 	}
 
@@ -205,10 +205,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 			f1 = vsetq_lane_u32(cq1_w0, f1, 3);
 			f2 = vsetq_lane_u32(cq2_w0, f2, 3);
 			f3 = vsetq_lane_u32(cq3_w0, f3, 3);
-			ol_flags0 = PKT_RX_RSS_HASH;
-			ol_flags1 = PKT_RX_RSS_HASH;
-			ol_flags2 = PKT_RX_RSS_HASH;
-			ol_flags3 = PKT_RX_RSS_HASH;
+			ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
+			ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
 		} else {
 			ol_flags0 = 0; ol_flags1 = 0;
 			ol_flags2 = 0; ol_flags3 = 0;
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
index ea29aec62f..530bf0082f 100644
--- a/drivers/net/octeontx2/otx2_rx.h
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -88,15 +88,15 @@ otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 		 */
 		*otx2_timestamp_dynfield(mbuf, tstamp) =
 				rte_be_to_cpu_64(*tstamp_ptr);
-		/* PKT_RX_IEEE1588_TMST flag needs to be set only in case
+		/* RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only in case
 		 * PTP packets are received.
 		 */
 		if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
 			tstamp->rx_tstamp =
 					*otx2_timestamp_dynfield(mbuf, tstamp);
 			tstamp->rx_ready = 1;
-			mbuf->ol_flags |= PKT_RX_IEEE1588_PTP |
-				PKT_RX_IEEE1588_TMST |
+			mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
+				RTE_MBUF_F_RX_IEEE1588_TMST |
 				tstamp->rx_tstamp_dynflag;
 		}
 	}
@@ -161,9 +161,9 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	 * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2
 	 */
 	if (likely(match_id)) {
-		ol_flags |= PKT_RX_FDIR;
+		ol_flags |= RTE_MBUF_F_RX_FDIR;
 		if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
-			ol_flags |= PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
 			mbuf->hash.fdir.hi = match_id - 1;
 		}
 	}
@@ -252,7 +252,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	int i;
 
 	if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
-		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+		return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 	/* 20 bits of tag would have the SPI */
 	spi = cq->tag & 0xFFFFF;
@@ -266,7 +266,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 
 	if (sa->replay_win_sz) {
 		if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
-			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+			return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 
 	l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
@@ -294,7 +294,7 @@ nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
 	m_len = ip_len + l2_len;
 	m->data_len = m_len;
 	m->pkt_len = m_len;
-	return PKT_RX_SEC_OFFLOAD;
+	return RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 static __rte_always_inline void
@@ -318,7 +318,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
@@ -326,11 +326,11 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
-			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 			mbuf->vlan_tci = rx->vtag0_tci;
 		}
 		if (rx->vtag1_gone) {
-			ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+			ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 			mbuf->vlan_tci_outer = rx->vtag1_tci;
 		}
 	}
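
nix_rx_sec_mbuf_update() reports inline IPsec results purely through
ol_flags; the application-side check then looks like this (a sketch,
helper name assumed):

	static void
	handle_rx_security(struct rte_mbuf *m)
	{
		if (!(m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD))
			return;		/* not touched by inline IPsec */
		if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
			rte_pktmbuf_free(m);	/* HW processing failed */
		/* else: packet was decrypted/verified by HW */
	}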
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b9..afc47ca888 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -364,26 +364,26 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			const uint8x16_t tbl = {
 				/* [0-15] = il4type:il3type */
 				0x04, /* none (IPv6 assumed) */
-				0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */
-				0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */
-				0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */
-				0x03, /* PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */
-				0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */
-				0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */
-				0x02, /* PKT_TX_IPV4  */
-				0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */
-				0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */
-				0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */
-				0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */
-				0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_TCP_CKSUM
+				0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
+				0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
+				0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
+				0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x02, /* RTE_MBUF_F_TX_IPV4  */
+				0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
+				0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
+				0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
+				0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
+				0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_TCP_CKSUM
 				       */
-				0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_SCTP_CKSUM
+				0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_SCTP_CKSUM
 				       */
-				0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-				       * PKT_TX_UDP_CKSUM
+				0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+				       * RTE_MBUF_F_TX_UDP_CKSUM
 				       */
 			};
 
@@ -655,40 +655,40 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 				{
 					/* [0-15] = il4type:il3type */
 					0x04, /* none (IPv6) */
-					0x14, /* PKT_TX_TCP_CKSUM (IPv6) */
-					0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */
-					0x34, /* PKT_TX_UDP_CKSUM (IPv6) */
-					0x03, /* PKT_TX_IP_CKSUM */
-					0x13, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
+					0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
+					0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
+					0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
+					0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x02, /* PKT_TX_IPV4 */
-					0x12, /* PKT_TX_IPV4 |
-					       * PKT_TX_TCP_CKSUM
+					0x02, /* RTE_MBUF_F_TX_IPV4 */
+					0x12, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x22, /* PKT_TX_IPV4 |
-					       * PKT_TX_SCTP_CKSUM
+					0x22, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x32, /* PKT_TX_IPV4 |
-					       * PKT_TX_UDP_CKSUM
+					0x32, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
-					0x03, /* PKT_TX_IPV4 |
-					       * PKT_TX_IP_CKSUM
+					0x03, /* RTE_MBUF_F_TX_IPV4 |
+					       * RTE_MBUF_F_TX_IP_CKSUM
 					       */
-					0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_TCP_CKSUM
+					0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_TCP_CKSUM
 					       */
-					0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_SCTP_CKSUM
+					0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_SCTP_CKSUM
 					       */
-					0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
-					       * PKT_TX_UDP_CKSUM
+					0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
+					       * RTE_MBUF_F_TX_UDP_CKSUM
 					       */
 				},
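
Each byte of the tables above packs il4type:il3type as two nibbles; e.g. 0x13
is il4type 1 (TCP checksum) with il3type 3 (IPv4 plus header checksum), and
the index is simply ol_flags bits 52..55. A hypothetical plain-array
equivalent of the NEON tbl lookup (sketch):

	static inline uint8_t
	nix_l34type_lookup(const uint8_t tbl[16], uint64_t ol_flags)
	{
		/* 4-bit index built from ol_flags bits 52..55 */
		uint8_t idx = (ol_flags & (RTE_MBUF_F_TX_L4_MASK |
					   RTE_MBUF_F_TX_IP_CKSUM |
					   RTE_MBUF_F_TX_IPV4)) >> 52;
		return tbl[idx];  /* il4type high nibble, il3type low */
	}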
 
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index 486248dff7..c9558e50a7 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -29,8 +29,8 @@
 	 NIX_TX_OFFLOAD_TSO_F)
 
 #define NIX_UDP_TUN_BITMASK \
-	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) | \
-	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
+	((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
+	 (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
 
 #define NIX_LSO_FORMAT_IDX_TSOV4	(0)
 #define NIX_LSO_FORMAT_IDX_TSOV6	(1)
@@ -54,7 +54,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		struct nix_send_mem_s *send_mem;
 		uint16_t off = (no_segdw - 1) << 1;
-		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
+		const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
 
 		send_mem = (struct nix_send_mem_s *)(cmd + off);
 		if (flags & NIX_TX_MULTI_SEG_F) {
@@ -67,7 +67,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd,  const uint64_t *send_mem_desc,
 			rte_compiler_barrier();
 		}
 
-		/* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp
+		/* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp
 		 * should not be recorded, hence changing the alg type to
 		 * NIX_SENDMEMALG_SET and also changing send mem addr field to
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
 		 * next 8 bytes as it corrupts the actual tx tstamp registered
@@ -152,12 +152,12 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 	uint64_t mask, ol_flags = m->ol_flags;
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
 		uint16_t *iplen, *oiplen, *oudplen;
 		uint16_t lso_sb, paylen;
 
-		mask = -!!(ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6));
+		mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
 		lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
 			m->l2_len + m->l3_len + m->l4_len;
 
@@ -166,15 +166,15 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 		/* Get iplen position assuming no tunnel hdr */
 		iplen = (uint16_t *)(mdata + m->l2_len +
-				     (2 << !!(ol_flags & PKT_TX_IPV6)));
+				     (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 
 			oiplen = (uint16_t *)(mdata + m->outer_l2_len +
-				(2 << !!(ol_flags & PKT_TX_OUTER_IPV6)));
+				(2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
 			*oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
 						   paylen);
 
@@ -189,7 +189,7 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 
 			/* Update iplen position to inner ip hdr */
 			iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
-				m->l4_len + (2 << !!(ol_flags & PKT_TX_IPV6)));
+				m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
 		}
 
 		*iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
@@ -239,11 +239,11 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 
 	if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
 	    (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t ol3type =
-			((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L3 */
 		w1.ol3type = ol3type;
@@ -255,15 +255,15 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol4type = csum + (csum << 1);
 
 		/* Inner L3 */
-		w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2);
+		w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
 		w1.il3ptr = w1.ol4ptr + m->l2_len;
 		w1.il4ptr = w1.il3ptr + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.il4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.il4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 
 		/* In case of no tunnel header use only
 		 * shift IL3/IL4 fields a bit to use
@@ -274,16 +274,16 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 			((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
 
 	} else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
-		const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		const uint8_t outer_l2_len = m->outer_l2_len;
 
 		/* Outer L3 */
 		w1.ol3ptr = outer_l2_len;
 		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
 
 		/* Outer L4 */
 		w1.ol4type = csum + (csum << 1);
@@ -299,29 +299,29 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		w1.ol3ptr = l2_len;
 		w1.ol4ptr = l2_len + m->l3_len;
 		/* Increment it by 1 if it is IPV4 as 3 is with csum */
-		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
-			((!!(ol_flags & PKT_TX_IPV6)) << 2) +
-			!!(ol_flags & PKT_TX_IP_CKSUM);
+		w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
+			((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
+			!!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
 
 		/* Inner L4 */
-		w1.ol4type =  (ol_flags & PKT_TX_L4_MASK) >> 52;
+		w1.ol4type =  (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
 	}
 
 	if (flags & NIX_TX_NEED_EXT_HDR &&
 	    flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
-		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
 		/* HW will update ptr after vlan0 update */
 		send_hdr_ext->w1.vlan1_ins_ptr = 12;
 		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
 
-		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
 		/* 2B before end of l2 header */
 		send_hdr_ext->w1.vlan0_ins_ptr = 12;
 		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
 	}
 
 	if (flags & NIX_TX_OFFLOAD_TSO_F &&
-	    (ol_flags & PKT_TX_TCP_SEG)) {
+	    (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 		uint16_t lso_sb;
 		uint64_t mask;
 
@@ -332,18 +332,18 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
 		send_hdr_ext->w0.lso = 1;
 		send_hdr_ext->w0.lso_mps = m->tso_segsz;
 		send_hdr_ext->w0.lso_format =
-			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);
+			NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
 		w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
 
 		/* Handle tunnel tso */
 		if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
-		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
+		    (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
-				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+				((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
 			uint8_t shift = is_udp_tun ? 32 : 0;
 
-			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
-			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
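
NIX_UDP_TUN_BITMASK (defined near the top of this file) turns the tunnel-type
enum stored at bits 45..48 of ol_flags into a one-hot membership test, which
is what the is_udp_tun expressions above compute. Spelled out (a sketch,
helper name assumed):

	static inline uint8_t
	nix_is_udp_tunnel(uint64_t ol_flags)
	{
		/* tun_type: 1 = VXLAN, 4 = GENEVE, ... */
		uint8_t tun_type =
			(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45;
		/* Only VXLAN and GENEVE have their bits set in the mask. */
		return (NIX_UDP_TUN_BITMASK >> tun_type) & 0x1;
	}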
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 050c6f5c32..1b4dfff3c3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1639,9 +1639,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1649,9 +1649,9 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			flags = fp_cqe->tunnel_pars_flags.flags;
@@ -1684,31 +1684,31 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (unlikely(CQE_HAS_VLAN(parse_flag) ||
 			     CQE_HAS_OUTER_VLAN(parse_flag))) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		if (rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -1837,7 +1837,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			tpa_info = &rxq->tpa_info[cqe_start_tpa->tpa_agg_index];
 			tpa_start_flg = true;
 			/* Mark it as LRO packet */
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 			/* In split mode,  seg_len is same as len_on_first_bd
 			 * and bw_ext_bd_len_list will be empty since there are
 			 * no additional buffers
@@ -1908,9 +1908,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					    "L4 csum failed, flags = 0x%x\n",
 					    parse_flag);
 				rxq->rx_hw_errors++;
-				ol_flags |= PKT_RX_L4_CKSUM_BAD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 			} else {
-				ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 			}
 
 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
@@ -1918,9 +1918,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					"Outer L3 csum failed, flags = 0x%x\n",
 					parse_flag);
 				  rxq->rx_hw_errors++;
-				  ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+				  ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			} else {
-				  ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+				  ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			}
 
 			if (tpa_start_flg)
@@ -1957,32 +1957,32 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 				    "L4 csum failed, flags = 0x%x\n",
 				    parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
 			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
 				   parse_flag);
 			rxq->rx_hw_errors++;
-			ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		} else {
-			ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		}
 
 		if (CQE_HAS_VLAN(parse_flag) ||
 		    CQE_HAS_OUTER_VLAN(parse_flag)) {
 			/* Note: FW doesn't indicate Q-in-Q packet */
-			ol_flags |= PKT_RX_VLAN;
+			ol_flags |= RTE_MBUF_F_RX_VLAN;
 			if (qdev->vlan_strip_flg) {
-				ol_flags |= PKT_RX_VLAN_STRIPPED;
+				ol_flags |= RTE_MBUF_F_RX_VLAN_STRIPPED;
 				rx_mb->vlan_tci = vlan_tci;
 			}
 		}
 
 		/* RSS Hash */
 		if (qdev->rss_enable) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rx_mb->hash.rss = rss_hash;
 		}
 
@@ -2178,7 +2178,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		m = tx_pkts[i];
 		ol_flags = m->ol_flags;
-		if (ol_flags & PKT_TX_TCP_SEG) {
+		if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			if (m->nb_segs >= ETH_TX_MAX_BDS_PER_LSO_PACKET) {
 				rte_errno = EINVAL;
 				break;
@@ -2196,14 +2196,14 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
 		}
 		if (ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
 			/* We support only limited tunnel protocols */
-			if (ol_flags & PKT_TX_TUNNEL_MASK) {
+			if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 				uint64_t temp;
 
-				temp = ol_flags & PKT_TX_TUNNEL_MASK;
-				if (temp == PKT_TX_TUNNEL_VXLAN ||
-				    temp == PKT_TX_TUNNEL_GENEVE ||
-				    temp == PKT_TX_TUNNEL_MPLSINUDP ||
-				    temp == PKT_TX_TUNNEL_GRE)
+				temp = ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK;
+				if (temp == RTE_MBUF_F_TX_TUNNEL_VXLAN ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GENEVE ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_MPLSINUDP ||
+				    temp == RTE_MBUF_F_TX_TUNNEL_GRE)
 					continue;
 			}
 
@@ -2311,13 +2311,13 @@ qede_xmit_pkts_regular(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			<< ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT;
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM)
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM)))
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM)))
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 
@@ -2456,7 +2456,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * offloads. Don't rely on pkt_type marked by Rx, instead use
 		 * tx_ol_flags to decide.
 		 */
-		tunn_flg = !!(tx_ol_flags & PKT_TX_TUNNEL_MASK);
+		tunn_flg = !!(tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 		if (tunn_flg) {
 			/* Check against max which is Tunnel IPv6 + ext */
@@ -2477,8 +2477,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
 			/* Outer IP checksum offload */
-			if (tx_ol_flags & (PKT_TX_OUTER_IP_CKSUM |
-					   PKT_TX_OUTER_IPV4)) {
+			if (tx_ol_flags & (RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+					   RTE_MBUF_F_TX_OUTER_IPV4)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
@@ -2490,8 +2490,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * and inner layers  lengths need to be provided in
 			 * mbuf.
 			 */
-			if ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
-						PKT_TX_TUNNEL_MPLSINUDP) {
+			if ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ==
+						RTE_MBUF_F_TX_TUNNEL_MPLSINUDP) {
 				mplsoudp_flg = true;
 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 				qede_mpls_tunn_tx_sanity_check(mbuf, txq);
@@ -2524,18 +2524,18 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				    1 << ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT;
 
 				/* Mark inner IPv6 if present */
-				if (tx_ol_flags & PKT_TX_IPV6)
+				if (tx_ol_flags & RTE_MBUF_F_TX_IPV6)
 					bd2_bf1 |=
 						1 << ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT;
 
 				/* Inner L4 offsets */
-				if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-				     (tx_ol_flags & (PKT_TX_UDP_CKSUM |
-							PKT_TX_TCP_CKSUM))) {
+				if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+				     (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM |
+							RTE_MBUF_F_TX_TCP_CKSUM))) {
 					/* Determines if BD3 is needed */
 					tunn_ipv6_ext_flg = true;
-					if ((tx_ol_flags & PKT_TX_L4_MASK) ==
-							PKT_TX_UDP_CKSUM) {
+					if ((tx_ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+							RTE_MBUF_F_TX_UDP_CKSUM) {
 						bd2_bf1 |=
 							1 << ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT;
 					}
@@ -2553,7 +2553,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			} /* End MPLSoUDP */
 		} /* End Tunnel handling */
 
-		if (tx_ol_flags & PKT_TX_TCP_SEG) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			lso_flg = true;
 			if (unlikely(txq->nb_tx_avail <
 						ETH_TX_MIN_BDS_PER_LSO_PKT))
@@ -2570,7 +2570,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			bd1_bd_flags_bf |= 1 << ETH_TX_1ST_BD_FLAGS_LSO_SHIFT;
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
-			/* PKT_TX_TCP_SEG implies PKT_TX_TCP_CKSUM */
+			/* RTE_MBUF_F_TX_TCP_SEG implies RTE_MBUF_F_TX_TCP_CKSUM */
 			bd1_bd_flags_bf |=
 					1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			mss = rte_cpu_to_le_16(mbuf->tso_segsz);
@@ -2587,14 +2587,14 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
 		}
 
 		/* Offload the IP checksum in the hardware */
-		if (tx_ol_flags & PKT_TX_IP_CKSUM) {
+		if (tx_ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
@@ -2602,8 +2602,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			 * csum offload is requested then we need to force
 			 * recalculation of L4 tunnel header csum also.
 			 */
-			if (tunn_flg && ((tx_ol_flags & PKT_TX_TUNNEL_MASK) !=
-							PKT_TX_TUNNEL_GRE)) {
+			if (tunn_flg && ((tx_ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) !=
+							RTE_MBUF_F_TX_TUNNEL_GRE)) {
 				bd1_bd_flags_bf |=
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK <<
 					ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT;
@@ -2611,8 +2611,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* L4 checksum offload (tcp or udp) */
-		if ((tx_ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)) &&
-		    (tx_ol_flags & (PKT_TX_UDP_CKSUM | PKT_TX_TCP_CKSUM))) {
+		if ((tx_ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)) &&
+		    (tx_ol_flags & (RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM))) {
 			bd1_bd_flags_bf |=
 				1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
 			/* There's no DPDK flag to request outer-L4 csum
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 025ed6fff2..828df1cf99 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -144,20 +144,20 @@
 
 #define QEDE_PKT_TYPE_TUNN_MAX_TYPE			0x20 /* 2^5 */
 
-#define QEDE_TX_CSUM_OFFLOAD_MASK (PKT_TX_IP_CKSUM              | \
-				   PKT_TX_TCP_CKSUM             | \
-				   PKT_TX_UDP_CKSUM             | \
-				   PKT_TX_OUTER_IP_CKSUM        | \
-				   PKT_TX_TCP_SEG		| \
-				   PKT_TX_IPV4			| \
-				   PKT_TX_IPV6)
+#define QEDE_TX_CSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM              | \
+				   RTE_MBUF_F_TX_TCP_CKSUM             | \
+				   RTE_MBUF_F_TX_UDP_CKSUM             | \
+				   RTE_MBUF_F_TX_OUTER_IP_CKSUM        | \
+				   RTE_MBUF_F_TX_TCP_SEG		| \
+				   RTE_MBUF_F_TX_IPV4			| \
+				   RTE_MBUF_F_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN		| \
-			      PKT_TX_TUNNEL_MASK)
+			      RTE_MBUF_F_TX_VLAN		| \
+			      RTE_MBUF_F_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
-	(PKT_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ QEDE_TX_OFFLOAD_MASK)
 
 /* TPA related structures */
 struct qede_agg_info {
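
The XOR with RTE_MBUF_F_TX_OFFLOAD_MASK (the union of all Tx offload flag
bits) yields every flag this PMD does not support; qede_xmit_prep_pkts()
rejects offending packets roughly along these lines (simplified sketch):

	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		struct rte_mbuf *m = tx_pkts[i];

		if (m->ol_flags & QEDE_TX_OFFLOAD_NOTSUP_MASK) {
			rte_errno = ENOTSUP;
			break;	/* caller sees i packets accepted */
		}
	}
	return i;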
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 777807985b..20f3b4eaba 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -241,7 +241,7 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			   unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
-	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	unsigned int tcph_off = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 				 m->outer_l2_len + m->outer_l3_len : 0) +
 				m->l2_len + m->l3_len;
 	unsigned int header_len = tcph_off + m->l4_len;
@@ -279,21 +279,21 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			 * to proceed with additional checks below.
 			 * Otherwise, throw an error.
 			 */
-			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			if ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 ||
 			    tso_bounce_buffer_len == 0)
 				return EINVAL;
 		}
 	}
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		case 0:
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 			/* FALLTHROUGH */
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			if (!(m->ol_flags &
-			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
+			      (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)))
 				return EINVAL;
 		}
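
The header/payload split computed in sfc_dp_tx_prepare_pkt() above can be
worked through as follows (a sketch, function name assumed): for tunnelled
TSO the outer headers count toward the TCP header offset.

	static inline size_t
	tso_nb_payload_segs(const struct rte_mbuf *m)
	{
		size_t tcph_off =
			((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
			 m->outer_l2_len + m->outer_l3_len : 0) +
			m->l2_len + m->l3_len;
		size_t header_len = tcph_off + m->l4_len;
		size_t payload_len = m->pkt_len - header_len;

		/* TSO segments the NIC will emit from this packet */
		return (payload_len + m->tso_segsz - 1) / m->tso_segsz;
	}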
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 1bf04f565a..35e1650851 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -203,7 +203,7 @@ sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 }
 
 static inline uint64_t
@@ -212,7 +212,7 @@ sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
 	return EFX_WORD_FIELD(class,
 			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
 		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
-		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD : RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
 }
 
 static uint32_t
@@ -268,11 +268,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -309,7 +309,7 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
@@ -320,11 +320,11 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
 			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			*ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 			break;
 		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
 			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
@@ -401,7 +401,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 	if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
 	    EFX_TEST_OWORD_BIT(rx_prefix[0],
 			       ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		/* EFX_OWORD_FIELD converts little-endian to CPU */
 		m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
 					      ESF_GZ_RX_PREFIX_RSS_HASH);
@@ -414,7 +414,7 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 		user_mark = EFX_OWORD_FIELD(rx_prefix[0],
 					    ESF_GZ_RX_PREFIX_USER_MARK);
 		if (user_mark != SFC_EF100_USER_MARK_INVALID) {
-			ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+			ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 			m->hash.fdir.hi = user_mark;
 		}
 	}
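
The Rx prefix decode above publishes the user mark through the mbuf hash
union and advertises it with RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID.
An application-side sketch of consuming that contract (hypothetical
helper, not from the patch):

  #include <rte_mbuf.h>

  static inline int
  ex_get_flow_mark(const struct rte_mbuf *m, uint32_t *mark)
  {
          /* hash.fdir.hi is valid only when RTE_MBUF_F_RX_FDIR_ID is
           * set; hash.rss aliases the same union and is valid only
           * with RTE_MBUF_F_RX_RSS_HASH. */
          if ((m->ol_flags & RTE_MBUF_F_RX_FDIR_ID) == 0)
                  return -1;
          *mark = m->hash.fdir.hi;
          return 0;
  }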
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 53d01612d1..78c16168ed 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -98,7 +98,7 @@ static int
 sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 			     struct rte_mbuf *m)
 {
-	size_t header_len = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t header_len = ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			     m->outer_l2_len + m->outer_l3_len : 0) +
 			    m->l2_len + m->l3_len + m->l4_len;
 	size_t payload_len = m->pkt_len - header_len;
@@ -106,12 +106,12 @@ sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 	unsigned int nb_payload_descs;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
-	switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 	case 0:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		break;
 	default:
 		return ENOTSUP;
@@ -164,11 +164,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * pseudo-header checksum which is calculated below,
 		 * but requires contiguous packet headers.
 		 */
-		if ((m->ol_flags & PKT_TX_TUNNEL_MASK) &&
-		    (m->ol_flags & PKT_TX_L4_MASK)) {
+		if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+		    (m->ol_flags & RTE_MBUF_F_TX_L4_MASK)) {
 			calc_phdr_cksum = true;
 			max_nb_header_segs = 1;
-		} else if (m->ol_flags & PKT_TX_TCP_SEG) {
+		} else if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			max_nb_header_segs = txq->tso_max_nb_header_descs;
 		}
 
@@ -180,7 +180,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG) {
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			ret = sfc_ef100_tx_prepare_pkt_tso(txq, m);
 			if (unlikely(ret != 0)) {
 				rte_errno = ret;
@@ -197,7 +197,7 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 * and does not require any assistance.
 			 */
 			ret = rte_net_intel_cksum_flags_prepare(m,
-					m->ol_flags & ~PKT_TX_IP_CKSUM);
+					m->ol_flags & ~RTE_MBUF_F_TX_IP_CKSUM);
 			if (unlikely(ret != 0)) {
 				rte_errno = -ret;
 				break;
@@ -315,10 +315,10 @@ sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
 	uint8_t inner_l3;
 
 	switch (tx_tunnel) {
-	case PKT_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_VXLAN;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_GENEVE;
 		break;
 	default:
@@ -338,25 +338,25 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 	uint16_t part_cksum_w;
 	uint16_t l4_offset_w;
 
-	if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == 0) {
-		outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+	if ((m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) == 0) {
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
 		partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
 		part_cksum_w = 0;
 		l4_offset_w = 0;
 	} else {
-		outer_l3 = (m->ol_flags & PKT_TX_OUTER_IP_CKSUM);
-		outer_l4 = (m->ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		outer_l3 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
+		outer_l4 = (m->ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
 		inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(m->ol_flags &
-							   PKT_TX_TUNNEL_MASK);
+							   RTE_MBUF_F_TX_TUNNEL_MASK);
 
-		switch (m->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_TCP_CKSUM:
+		switch (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP;
 			part_cksum_w = offsetof(struct rte_tcp_hdr, cksum) >> 1;
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP;
 			part_cksum_w = offsetof(struct rte_udp_hdr,
 						dgram_cksum) >> 1;
@@ -382,7 +382,7 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -423,7 +423,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 	 */
 	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
 	uint8_t inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(
-					m->ol_flags & PKT_TX_TUNNEL_MASK);
+					m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
@@ -464,7 +464,7 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN) {
+	if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
@@ -505,7 +505,7 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 #define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
 	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		/* Tx TSO descriptor */
 		extra_descs++;
 		/*
@@ -552,7 +552,7 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 	size_t header_len;
 	size_t remaining_hdr_len;
 
-	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		outer_iph_off = m->outer_l2_len;
 		outer_udph_off = outer_iph_off + m->outer_l3_len;
 	} else {
@@ -671,7 +671,7 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				break;
 		}
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
 		} else {
 			id = added++ & txq->ptr_mask;
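
Worth noting in the TSO path above: RTE_MBUF_F_TX_TUNNEL_MASK selects a
multi-bit field, so after masking, the extracted value can be switched on
directly to accept only the tunnel types the datapath handles. A
standalone sketch of the same check:

  #include <rte_mbuf.h>

  static inline int
  ex_tso_tunnel_supported(uint64_t ol_flags)
  {
          switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
          case 0:                                 /* not tunnelled */
          case RTE_MBUF_F_TX_TUNNEL_VXLAN:
          case RTE_MBUF_F_TX_TUNNEL_GENEVE:
                  return 1;
          default:                                /* e.g. GRE, IPIP */
                  return 0;
          }
  }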
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f..eda468df3f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -374,13 +374,13 @@ sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			rte_pktmbuf_data_len(m) = pkt_len;
 
 			m->ol_flags |=
-				(PKT_RX_RSS_HASH *
+				(RTE_MBUF_F_RX_RSS_HASH *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_HASH_VALID_LBN)) |
-				(PKT_RX_FDIR_ID *
+				(RTE_MBUF_F_RX_FDIR_ID *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MARK_VALID_LBN)) |
-				(PKT_RX_FDIR *
+				(RTE_MBUF_F_RX_FDIR *
 				 !!EFX_TEST_QWORD_BIT(*qwordp,
 					ES_EZ_ESSB_RX_PREFIX_MATCH_FLAG_LBN));
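
The multiplications above are a branchless select: !!bit normalises the
hardware bit to 0 or 1, so flag * !!bit yields either the flag or nothing
without a conditional jump per flag. The idiom in isolation:

  #include <stdint.h>

  static inline uint64_t
  ex_flag_if(uint64_t flag, unsigned int hw_bit)
  {
          /* 0 or 1 times the flag: no branch needed per packet. */
          return flag * (uint64_t)!!hw_bit;
  }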
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42..8ddb830642 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -330,7 +330,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/* Mask RSS hash offload flag if RSS is not enabled */
 	sfc_ef10_rx_ev_to_offloads(rx_ev, m,
 				   (rxq->flags & SFC_EF10_RXQ_RSS_HASH) ?
-				   ~0ull : ~PKT_RX_RSS_HASH);
+				   ~0ull : ~RTE_MBUF_F_RX_RSS_HASH);
 
 	/* data_off already moved past pseudo header */
 	pseudo_hdr = (uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
@@ -338,7 +338,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 	/*
 	 * Always get RSS hash from pseudo header to avoid
 	 * condition/branching. If it is valid or not depends on
-	 * PKT_RX_RSS_HASH in m->ol_flags.
+	 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 	 */
 	m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
@@ -392,7 +392,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		/*
 		 * Always get RSS hash from pseudo header to avoid
 		 * condition/branching. If it is valid or not depends on
-		 * PKT_RX_RSS_HASH in m->ol_flags.
+		 * RTE_MBUF_F_RX_RSS_HASH in m->ol_flags.
 		 */
 		m->hash.rss = sfc_ef10_rx_pseudo_hdr_get_hash(pseudo_hdr);
 
diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h
index a7f5b9168b..821e2227bb 100644
--- a/drivers/net/sfc/sfc_ef10_rx_ev.h
+++ b/drivers/net/sfc/sfc_ef10_rx_ev.h
@@ -27,9 +27,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			   uint64_t ol_mask)
 {
 	uint32_t tun_ptype = 0;
-	/* Which event bit is mapped to PKT_RX_IP_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_IP_CKSUM_* */
 	int8_t ip_csum_err_bit;
-	/* Which event bit is mapped to PKT_RX_L4_CKSUM_* */
+	/* Which event bit is mapped to RTE_MBUF_F_RX_L4_CKSUM_* */
 	int8_t l4_csum_err_bit;
 	uint32_t l2_ptype = 0;
 	uint32_t l3_ptype = 0;
@@ -76,7 +76,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 		l4_csum_err_bit = ESF_EZ_RX_TCP_UDP_INNER_CHKSUM_ERR_LBN;
 		if (unlikely(EFX_TEST_QWORD_BIT(rx_ev,
 						ESF_DZ_RX_IPCKSUM_ERR_LBN)))
-			ol_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 	switch (EFX_QWORD_FIELD(rx_ev, ESF_DZ_RX_ETH_TAG_CLASS)) {
@@ -105,9 +105,9 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP4:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV4_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH |
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH |
 			((EFX_TEST_QWORD_BIT(rx_ev, ip_csum_err_bit)) ?
-			 PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+			 RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 		break;
 	case ESE_DZ_L3_CLASS_IP6_FRAG:
 		l4_ptype = (tun_ptype == 0) ? RTE_PTYPE_L4_FRAG :
@@ -116,7 +116,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 	case ESE_DZ_L3_CLASS_IP6:
 		l3_ptype = (tun_ptype == 0) ? RTE_PTYPE_L3_IPV6_EXT_UNKNOWN :
 			RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
-		ol_flags |= PKT_RX_RSS_HASH;
+		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 		break;
 	case ESE_DZ_L3_CLASS_ARP:
 		/* Override Layer 2 packet type */
@@ -144,7 +144,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_TCP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UDP:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UDP != ESE_DE_L4_CLASS_UDP);
@@ -152,7 +152,7 @@ sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m,
 			RTE_PTYPE_INNER_L4_UDP;
 		ol_flags |=
 			(EFX_TEST_QWORD_BIT(rx_ev, l4_csum_err_bit)) ?
-			PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD;
+			RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case ESE_FZ_L4_CLASS_UNKNOWN:
 		 RTE_BUILD_BUG_ON(ESE_FZ_L4_CLASS_UNKNOWN !=
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 277fe6c6ca..e58f8bbe8c 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -341,7 +341,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * the size limit. Perform the check in debug mode since MTU
 		 * more than 9k is not supported, but the limit here is 16k-1.
 		 */
-		if (!(m->ol_flags & PKT_TX_TCP_SEG)) {
+		if (!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
 			struct rte_mbuf *m_seg;
 
 			for (m_seg = m; m_seg != NULL; m_seg = m_seg->next) {
@@ -371,7 +371,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		      unsigned int *added, unsigned int *dma_desc_space,
 		      bool *reap_done)
 {
-	size_t iph_off = ((m_seg->ol_flags & PKT_TX_TUNNEL_MASK) ?
+	size_t iph_off = ((m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
 			  m_seg->outer_l2_len + m_seg->outer_l3_len : 0) +
 			 m_seg->l2_len;
 	size_t tcph_off = iph_off + m_seg->l3_len;
@@ -489,10 +489,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 *
 	 * The same concern applies to outer UDP datagram length field.
 	 */
-	switch (m_seg->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (m_seg->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		/* FALLTHROUGH */
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		sfc_tso_outer_udp_fix_len(first_m_seg, hdr_addr);
 		break;
 	default:
@@ -506,10 +506,10 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 	 * filled in in TSO mbuf. Use zero IPID if there is no IPv4 flag.
 	 * If the packet is still IPv4, HW will simply start from zero IPID.
 	 */
-	if (first_m_seg->ol_flags & PKT_TX_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(hdr_addr, iph_off);
 
-	if (first_m_seg->ol_flags & PKT_TX_OUTER_IPV4)
+	if (first_m_seg->ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)
 		outer_packet_id = sfc_tso_ip4_get_ipid(hdr_addr,
 						first_m_seg->outer_l2_len);
 
@@ -648,7 +648,7 @@ sfc_ef10_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		if (likely(pktp + 1 != pktp_end))
 			rte_mbuf_prefetch_part1(pktp[1]);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			int rc;
 
 			rc = sfc_ef10_xmit_tso_pkt(txq, m_seg, &added,
@@ -805,7 +805,7 @@ sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
+			     (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9..66024f3e53 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -148,15 +148,15 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 
 	switch (desc_flags & (EFX_PKT_IPV4 | EFX_CKSUM_IPV4)) {
 	case (EFX_PKT_IPV4 | EFX_CKSUM_IPV4):
-		mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 		break;
 	case EFX_PKT_IPV4:
-		mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_IP_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_IP_CKSUM_MASK) ==
-			   PKT_RX_IP_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -164,16 +164,16 @@ sfc_efx_rx_desc_flags_to_offload_flags(const unsigned int desc_flags)
 		 (EFX_PKT_TCP | EFX_PKT_UDP | EFX_CKSUM_TCPUDP))) {
 	case (EFX_PKT_TCP | EFX_CKSUM_TCPUDP):
 	case (EFX_PKT_UDP | EFX_CKSUM_TCPUDP):
-		mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		break;
 	case EFX_PKT_TCP:
 	case EFX_PKT_UDP:
-		mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+		mbuf_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 		break;
 	default:
-		RTE_BUILD_BUG_ON(PKT_RX_L4_CKSUM_UNKNOWN != 0);
-		SFC_ASSERT((mbuf_flags & PKT_RX_L4_CKSUM_MASK) ==
-			   PKT_RX_L4_CKSUM_UNKNOWN);
+		RTE_BUILD_BUG_ON(RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN != 0);
+		SFC_ASSERT((mbuf_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
+			   RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN);
 		break;
 	}
 
@@ -224,7 +224,7 @@ sfc_efx_rx_set_rss_hash(struct sfc_efx_rxq *rxq, unsigned int flags,
 						      EFX_RX_HASHALG_TOEPLITZ,
 						      mbuf_data);
 
-		m->ol_flags |= PKT_RX_RSS_HASH;
+		m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 }
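
The RTE_BUILD_BUG_ON above captures an invariant the rename preserves:
*_CKSUM_UNKNOWN is the zero value of a small field, and GOOD/BAD/NONE are
other values of that field rather than independent bits. Consumers must
therefore mask first and compare for equality; a sketch:

  #include <rte_mbuf.h>

  static inline int
  ex_rx_l4_csum_good(const struct rte_mbuf *m)
  {
          uint64_t st = m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK;

          /* GOOD/BAD/NONE/UNKNOWN are field values, not single bits. */
          return st == RTE_MBUF_F_RX_L4_CKSUM_GOOD;
  }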
 
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index 29d0836b65..927e351a6e 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -153,7 +153,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
 	 * IPv4 flag. If the packet is still IPv4, HW will simply start from
 	 * zero IPID.
 	 */
-	if (m->ol_flags & PKT_TX_IPV4)
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4)
 		packet_id = sfc_tso_ip4_get_ipid(tsoh, nh_off);
 
 	/* Handle TCP header */
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index f081e856e1..9029ad1590 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -59,7 +59,7 @@ sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
 	size_t field_ofst;
 	rte_be16_t len;
 
-	if (m->ol_flags & PKT_TX_IPV4) {
+	if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
 		field_ofst = offsetof(struct rte_ipv4_hdr, total_length);
 		len = rte_cpu_to_be_16(m->l3_len + ip_payload_len);
 	} else {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 936ae815ea..fd79e67efa 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@ static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
+	uint16_t this_tag = ((m->ol_flags & RTE_MBUF_F_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
@@ -869,7 +869,7 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
 
-		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+		if (m_seg->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 			/*
 			 * We expect correct 'pkt->l[2, 3, 4]_len' values
 			 * to be set correctly by the caller
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d..19236e574e 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -340,8 +340,8 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 
 		cksum = ~rte_raw_cksum(iph, l3_len);
 		mbuf->ol_flags |= cksum ?
-			PKT_RX_IP_CKSUM_BAD :
-			PKT_RX_IP_CKSUM_GOOD;
+			RTE_MBUF_F_RX_IP_CKSUM_BAD :
+			RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 	} else if (l3 == RTE_PTYPE_L3_IPV6) {
 		struct rte_ipv6_hdr *iph = l3_hdr;
 
@@ -376,7 +376,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 					 * indicates that the sender did not
 					 * generate one [RFC 768].
 					 */
-					mbuf->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+					mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 					return;
 				}
 			}
@@ -387,7 +387,7 @@ tap_verify_csum(struct rte_mbuf *mbuf)
 								 l4_hdr);
 		}
 		mbuf->ol_flags |= cksum_ok ?
-			PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+			RTE_MBUF_F_RX_L4_CKSUM_GOOD : RTE_MBUF_F_RX_L4_CKSUM_BAD;
 	}
 }
 
@@ -544,7 +544,7 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 {
 	void *l3_hdr = packet + l2_len;
 
-	if (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4)) {
+	if (ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4)) {
 		struct rte_ipv4_hdr *iph = l3_hdr;
 		uint16_t cksum;
 
@@ -552,18 +552,18 @@ tap_tx_l3_cksum(char *packet, uint64_t ol_flags, unsigned int l2_len,
 		cksum = rte_raw_cksum(iph, l3_len);
 		iph->hdr_checksum = (cksum == 0xffff) ? cksum : ~cksum;
 	}
-	if (ol_flags & PKT_TX_L4_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 		void *l4_hdr;
 
 		l4_hdr = packet + l2_len + l3_len;
-		if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM)
 			*l4_cksum = &((struct rte_udp_hdr *)l4_hdr)->dgram_cksum;
-		else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM)
+		else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM)
 			*l4_cksum = &((struct rte_tcp_hdr *)l4_hdr)->cksum;
 		else
 			return;
 		**l4_cksum = 0;
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			*l4_phdr_cksum = rte_ipv4_phdr_cksum(l3_hdr, 0);
 		else
 			*l4_phdr_cksum = rte_ipv6_phdr_cksum(l3_hdr, 0);
@@ -627,9 +627,9 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs,
 
 		nb_segs = mbuf->nb_segs;
 		if (txq->csum &&
-		    ((mbuf->ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_IPV4) ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM ||
-		     (mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM))) {
+		    ((mbuf->ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_IPV4) ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM ||
+		      (mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM))) {
 			is_cksum = 1;
 
 			/* Support only packets with at least layer 4
@@ -719,12 +719,12 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		uint16_t hdrs_len;
 		uint64_t tso;
 
-		tso = mbuf_in->ol_flags & PKT_TX_TCP_SEG;
+		tso = mbuf_in->ol_flags & RTE_MBUF_F_TX_TCP_SEG;
 		if (tso) {
 			struct rte_gso_ctx *gso_ctx = &txq->gso_ctx;
 
 			/* TCP segmentation implies TCP checksum offload */
-			mbuf_in->ol_flags |= PKT_TX_TCP_CKSUM;
+			mbuf_in->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 			/* gso size is calculated without RTE_ETHER_CRC_LEN */
 			hdrs_len = mbuf_in->l2_len + mbuf_in->l3_len +
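
The tap hunks show the matching Tx-side rule: RTE_MBUF_F_TX_L4_MASK is a
two-bit field whose UDP value shares a bit with the TCP value, so a plain
bit-test would misfire and the comparison must be an equality after
masking. In isolation:

  #include <rte_mbuf.h>

  static inline int
  ex_wants_udp_cksum(const struct rte_mbuf *m)
  {
          /* (flags & TCP_CKSUM) would also be non-zero for UDP
           * requests, hence mask-and-compare. */
          return (m->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
                 RTE_MBUF_F_TX_UDP_CKSUM;
  }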
diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 91e09ff8d5..4a433435c6 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -42,10 +42,10 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 	ol_flags = pkt->ol_flags & NICVF_TX_OFFLOAD_MASK;
 	if (unlikely(ol_flags)) {
 		/* L4 cksum */
-		uint64_t l4_flags = ol_flags & PKT_TX_L4_MASK;
-		if (l4_flags == PKT_TX_TCP_CKSUM)
+		uint64_t l4_flags = ol_flags & RTE_MBUF_F_TX_L4_MASK;
+		if (l4_flags == RTE_MBUF_F_TX_TCP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_TCP;
-		else if (l4_flags == PKT_TX_UDP_CKSUM)
+		else if (l4_flags == RTE_MBUF_F_TX_UDP_CKSUM)
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_UDP;
 		else
 			sqe.hdr.csum_l4 = SEND_L4_CSUM_DISABLE;
@@ -54,7 +54,7 @@ fill_sq_desc_header(union sq_entry_t *entry, struct rte_mbuf *pkt)
 		sqe.hdr.l4_offset = pkt->l3_len + pkt->l2_len;
 
 		/* L3 cksum */
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			sqe.hdr.csum_l3 = 1;
 	}
 
@@ -343,9 +343,9 @@ static inline uint64_t __rte_hot
 nicvf_set_olflags(const cqe_rx_word0_t cqe_rx_w0)
 {
 	static const uint64_t flag_table[3] __rte_cache_aligned = {
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD,
-		PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_UNKNOWN,
-		PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_BAD,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
+		RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
 	};
 
 	const uint8_t idx = (cqe_rx_w0.err_opcode == CQE_RX_ERR_L4_CHK) << 1 |
@@ -409,7 +409,7 @@ nicvf_rx_offload(cqe_rx_word0_t cqe_rx_w0, cqe_rx_word2_t cqe_rx_w2,
 {
 	if (likely(cqe_rx_w0.rss_alg)) {
 		pkt->hash.rss = cqe_rx_w2.rss_tag;
-		pkt->ol_flags |= PKT_RX_RSS_HASH;
+		pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
 	}
 }
@@ -454,8 +454,8 @@ nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 			pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 		if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 			if (unlikely(cqe_rx_w0.vlan_stripped)) {
-				pkt->ol_flags |= PKT_RX_VLAN
-							| PKT_RX_VLAN_STRIPPED;
+				pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+							| RTE_MBUF_F_RX_VLAN_STRIPPED;
 				pkt->vlan_tci =
 					rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 			}
@@ -549,8 +549,8 @@ nicvf_process_cq_mseg_entry(struct cqe_rx_t *cqe_rx,
 		pkt->ol_flags = nicvf_set_olflags(cqe_rx_w0);
 	if (flag & NICVF_RX_OFFLOAD_VLAN_STRIP) {
 		if (unlikely(cqe_rx_w0.vlan_stripped)) {
-			pkt->ol_flags |= PKT_RX_VLAN
-				| PKT_RX_VLAN_STRIPPED;
+			pkt->ol_flags |= RTE_MBUF_F_RX_VLAN
+				| RTE_MBUF_F_RX_VLAN_STRIPPED;
 			pkt->vlan_tci = rte_cpu_to_be_16(cqe_rx_w2.vlan_tci);
 		}
 	}
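
nicvf_set_olflags() above trades branches for a small cache-aligned table
indexed by the completion error code. A sketch of the pattern with a
two-bit index (table contents illustrative):

  #include <rte_mbuf.h>

  static inline uint64_t
  ex_decode_rx_csum(unsigned int ip_err, unsigned int l4_err)
  {
          static const uint64_t tbl[4] = {
                  RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD,
                  RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN,
                  RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
                  RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_L4_CKSUM_BAD,
          };

          /* One load replaces two conditionals per packet. */
          return tbl[((unsigned int)!!l4_err << 1) | !!ip_err];
  }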
diff --git a/drivers/net/thunderx/nicvf_rxtx.h b/drivers/net/thunderx/nicvf_rxtx.h
index d6ed660b4e..3e1d40bbeb 100644
--- a/drivers/net/thunderx/nicvf_rxtx.h
+++ b/drivers/net/thunderx/nicvf_rxtx.h
@@ -12,7 +12,7 @@
 #define NICVF_RX_OFFLOAD_CKSUM          0x2
 #define NICVF_RX_OFFLOAD_VLAN_STRIP     0x4
 
-#define NICVF_TX_OFFLOAD_MASK (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+#define NICVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK)
 
 #if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
 static inline uint16_t __attribute__((const))
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b267da462b..bb300bae40 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1136,10 +1136,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 	rxq = dev->data->rx_queues[queue];
 
 	if (on) {
-		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->vlan_flags = RTE_MBUF_F_RX_VLAN;
 		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
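
The per-queue vlan_flags above encodes the flag semantics:
RTE_MBUF_F_RX_VLAN promises that vlan_tci is populated, while
RTE_MBUF_F_RX_VLAN_STRIPPED additionally means the tag was removed from
the packet data. A consumer-side sketch:

  #include <rte_mbuf.h>

  static inline int
  ex_rx_vlan_tci(const struct rte_mbuf *m, uint16_t *tci)
  {
          /* Without RX_VLAN, vlan_tci is undefined; with it but
           * without RX_VLAN_STRIPPED, the tag is still in the data. */
          if ((m->ol_flags & RTE_MBUF_F_RX_VLAN) == 0)
                  return -1;
          *tci = m->vlan_tci;
          return 0;
  }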
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index f8c1ad3937..33774bc6fa 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -43,30 +43,30 @@
 #include "txgbe_rxtx.h"
 
 #ifdef RTE_LIBRTE_IEEE1588
-#define TXGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST
+#define TXGBE_TX_IEEE1588_TMST RTE_MBUF_F_TX_IEEE1588_TMST
 #else
 #define TXGBE_TX_IEEE1588_TMST 0
 #endif
 
 /* Bit Mask to indicate what bits required for building TX context */
-static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
-		PKT_TX_OUTER_IPV6 |
-		PKT_TX_OUTER_IPV4 |
-		PKT_TX_IPV6 |
-		PKT_TX_IPV4 |
-		PKT_TX_VLAN |
-		PKT_TX_L4_MASK |
-		PKT_TX_TCP_SEG |
-		PKT_TX_TUNNEL_MASK |
-		PKT_TX_OUTER_IP_CKSUM |
-		PKT_TX_OUTER_UDP_CKSUM |
+static const u64 TXGBE_TX_OFFLOAD_MASK = (RTE_MBUF_F_TX_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_IPV6 |
+		RTE_MBUF_F_TX_OUTER_IPV4 |
+		RTE_MBUF_F_TX_IPV6 |
+		RTE_MBUF_F_TX_IPV4 |
+		RTE_MBUF_F_TX_VLAN |
+		RTE_MBUF_F_TX_L4_MASK |
+		RTE_MBUF_F_TX_TCP_SEG |
+		RTE_MBUF_F_TX_TUNNEL_MASK |
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM |
 #ifdef RTE_LIB_SECURITY
-		PKT_TX_SEC_OFFLOAD |
+		RTE_MBUF_F_TX_SEC_OFFLOAD |
 #endif
 		TXGBE_TX_IEEE1588_TMST);
 
 #define TXGBE_TX_OFFLOAD_NOTSUP_MASK \
-		(PKT_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
 
 /*
  * Prefetch a cache line into all cache levels.
@@ -339,7 +339,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 	type_tucmd_mlhl |= TXGBE_TXD_PTID(tx_offload.ptid);
 
 	/* check if TCP segmentation required for this packet */
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tx_offload_mask.l2_len |= ~0;
 		tx_offload_mask.l3_len |= ~0;
 		tx_offload_mask.l4_len |= ~0;
@@ -347,25 +347,25 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		mss_l4len_idx |= TXGBE_TXD_MSS(tx_offload.tso_segsz);
 		mss_l4len_idx |= TXGBE_TXD_L4LEN(tx_offload.l4_len);
 	} else { /* no TSO, check if hardware checksum is needed */
-		if (ol_flags & PKT_TX_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 		}
 
-		switch (ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
+		switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
 			tx_offload_mask.l2_len |= ~0;
 			tx_offload_mask.l3_len |= ~0;
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			mss_l4len_idx |=
 				TXGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
 			tx_offload_mask.l2_len |= ~0;
@@ -378,7 +378,7 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 
 	vlan_macip_lens = TXGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
 
-	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
 		tx_offload_mask.outer_tun_len |= ~0;
 		tx_offload_mask.outer_l2_len |= ~0;
 		tx_offload_mask.outer_l3_len |= ~0;
@@ -386,16 +386,16 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		tunnel_seed = TXGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
 		tunnel_seed |= TXGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
 
-		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
-		case PKT_TX_TUNNEL_IPIP:
+		switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		case RTE_MBUF_F_TX_TUNNEL_IPIP:
 			/* for non UDP / GRE tunneling, set to 0b */
 			break;
-		case PKT_TX_TUNNEL_VXLAN:
-		case PKT_TX_TUNNEL_VXLAN_GPE:
-		case PKT_TX_TUNNEL_GENEVE:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_UDP;
 			break;
-		case PKT_TX_TUNNEL_GRE:
+		case RTE_MBUF_F_TX_TUNNEL_GRE:
 			tunnel_seed |= TXGBE_TXD_ETYPE_GRE;
 			break;
 		default:
@@ -408,13 +408,13 @@ txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN) {
+	if (ol_flags & RTE_MBUF_F_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
 
 #ifdef RTE_LIB_SECURITY
-	if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+	if (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 		union txgbe_crypto_tx_desc_md *md =
 				(union txgbe_crypto_tx_desc_md *)mdata;
 		tunnel_seed |= TXGBE_TXD_IPSEC_SAIDX(md->sa_idx);
@@ -477,26 +477,26 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 {
 	uint32_t tmp = 0;
 
-	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) != RTE_MBUF_F_TX_L4_NO_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_IPCS;
 	}
-	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+	if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 		tmp |= TXGBE_TXD_CC;
 		tmp |= TXGBE_TXD_EIPCS;
 	}
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		tmp |= TXGBE_TXD_CC;
 		/* implies IPv4 cksum */
-		if (ol_flags & PKT_TX_IPV4)
+		if (ol_flags & RTE_MBUF_F_TX_IPV4)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,11 +507,11 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN)
+	if (ol_flags & RTE_MBUF_F_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
-	if (ol_flags & PKT_TX_MACSEC)
+	if (ol_flags & RTE_MBUF_F_TX_MACSEC)
 		cmdtype |= TXGBE_TXD_LINKSEC;
 	return cmdtype;
 }
@@ -525,67 +525,67 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
 		return txgbe_encode_ptype(ptype);
 
 	/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
-	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+	tun = !!(oflags & RTE_MBUF_F_TX_TUNNEL_MASK);
 
 	/* L2 level */
 	ptype = RTE_PTYPE_L2_ETHER;
-	if (oflags & PKT_TX_VLAN)
+	if (oflags & RTE_MBUF_F_TX_VLAN)
 		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
 
 	/* L3 level */
-	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IP_CKSUM))
 		ptype |= RTE_PTYPE_L3_IPV4;
-	else if (oflags & (PKT_TX_OUTER_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_OUTER_IPV6))
 		ptype |= RTE_PTYPE_L3_IPV6;
 
-	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+	if (oflags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
-	else if (oflags & (PKT_TX_IPV6))
+	else if (oflags & (RTE_MBUF_F_TX_IPV6))
 		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
 
 	/* L4 level */
-	switch (oflags & (PKT_TX_L4_MASK)) {
-	case PKT_TX_TCP_CKSUM:
+	switch (oflags & (RTE_MBUF_F_TX_L4_MASK)) {
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 		break;
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
 		break;
-	case PKT_TX_SCTP_CKSUM:
+	case RTE_MBUF_F_TX_SCTP_CKSUM:
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
 		break;
 	}
 
-	if (oflags & PKT_TX_TCP_SEG)
+	if (oflags & RTE_MBUF_F_TX_TCP_SEG)
 		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
 
 	/* Tunnel */
-	switch (oflags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_VXLAN:
+	switch (oflags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GRE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_GENEVE;
 		ptype |= RTE_PTYPE_INNER_L2_ETHER;
 		break;
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
 		break;
-	case PKT_TX_TUNNEL_IPIP:
-	case PKT_TX_TUNNEL_IP:
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
+	case RTE_MBUF_F_TX_TUNNEL_IP:
 		ptype |= RTE_PTYPE_L2_ETHER |
 			 RTE_PTYPE_L3_IPV4 |
 			 RTE_PTYPE_TUNNEL_IP;
@@ -669,19 +669,19 @@ txgbe_get_tun_len(struct rte_mbuf *mbuf)
 	const struct txgbe_genevehdr *gh;
 	uint8_t tun_len;
 
-	switch (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) {
-	case PKT_TX_TUNNEL_IPIP:
+	switch (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+	case RTE_MBUF_F_TX_TUNNEL_IPIP:
 		tun_len = 0;
 		break;
-	case PKT_TX_TUNNEL_VXLAN:
-	case PKT_TX_TUNNEL_VXLAN_GPE:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE:
 		tun_len = sizeof(struct txgbe_udphdr)
 			+ sizeof(struct txgbe_vxlanhdr);
 		break;
-	case PKT_TX_TUNNEL_GRE:
+	case RTE_MBUF_F_TX_TUNNEL_GRE:
 		tun_len = sizeof(struct txgbe_nvgrehdr);
 		break;
-	case PKT_TX_TUNNEL_GENEVE:
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
 		gh = rte_pktmbuf_read(mbuf,
 			mbuf->outer_l2_len + mbuf->outer_l3_len,
 			sizeof(genevehdr), &genevehdr);
@@ -751,7 +751,7 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 */
 		ol_flags = tx_pkt->ol_flags;
 #ifdef RTE_LIB_SECURITY
-		use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD);
+		use_ipsec = txq->using_ipsec && (ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD);
 #endif
 
 		/* If hardware offload required */
@@ -895,20 +895,20 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		cmd_type_len = TXGBE_TXD_FCS;
 
 #ifdef RTE_LIBRTE_IEEE1588
-		if (ol_flags & PKT_TX_IEEE1588_TMST)
+		if (ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
 			cmd_type_len |= TXGBE_TXD_1588;
 #endif
 
 		olinfo_status = 0;
 		if (tx_ol_req) {
-			if (ol_flags & PKT_TX_TCP_SEG) {
+			if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 				/* when TSO is on, paylen in descriptor is the
 				 * not the packet len but the tcp payload len
 				 */
 				pkt_len -= (tx_offload.l2_len +
 					tx_offload.l3_len + tx_offload.l4_len);
 				pkt_len -=
-					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					(tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 					? tx_offload.outer_l2_len +
 					  tx_offload.outer_l3_len : 0;
 			}
@@ -1076,14 +1076,14 @@ static inline uint64_t
 txgbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
 {
 	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
-		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
-		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
-		PKT_RX_RSS_HASH, 0, 0, 0,
-		0, 0, 0,  PKT_RX_FDIR,
+		0, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH,
+		0, RTE_MBUF_F_RX_RSS_HASH, 0, RTE_MBUF_F_RX_RSS_HASH,
+		RTE_MBUF_F_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  RTE_MBUF_F_RX_FDIR,
 	};
 #ifdef RTE_LIBRTE_IEEE1588
 	static uint64_t ip_pkt_etqf_map[8] = {
-		0, 0, 0, PKT_RX_IEEE1588_PTP,
+		0, 0, 0, RTE_MBUF_F_RX_IEEE1588_PTP,
 		0, 0, 0, 0,
 	};
 	int etfid = txgbe_etflt_id(TXGBE_RXD_PTID(pkt_info));
@@ -1108,12 +1108,12 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
 	 * That can be found from rte_eth_rxmode.offloads flag
 	 */
 	pkt_flags = (rx_status & TXGBE_RXD_STAT_VLAN &&
-		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		     vlan_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
 		    ? vlan_flags : 0;
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (rx_status & TXGBE_RXD_STAT_1588)
-		pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
+		pkt_flags = pkt_flags | RTE_MBUF_F_RX_IEEE1588_TMST;
 #endif
 	return pkt_flags;
 }
@@ -1126,24 +1126,24 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	/* checksum offload can't be disabled */
 	if (rx_status & TXGBE_RXD_STAT_IPCS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_IPCS
-				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_IP_CKSUM_BAD : RTE_MBUF_F_RX_IP_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_L4CS) {
 		pkt_flags |= (rx_status & TXGBE_RXD_ERR_L4CS
-				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+				? RTE_MBUF_F_RX_L4_CKSUM_BAD : RTE_MBUF_F_RX_L4_CKSUM_GOOD);
 	}
 
 	if (rx_status & TXGBE_RXD_STAT_EIPCS &&
 	    rx_status & TXGBE_RXD_ERR_EIPCS) {
-		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+		pkt_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
 	}
 
 #ifdef RTE_LIB_SECURITY
 	if (rx_status & TXGBE_RXD_STAT_SECP) {
-		pkt_flags |= PKT_RX_SEC_OFFLOAD;
+		pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (rx_status & TXGBE_RXD_ERR_SECERR)
-			pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			pkt_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 	}
 #endif
 
@@ -1226,10 +1226,10 @@ txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
 				txgbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
 				rxq->pkt_type_mask);
 
-			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH))
 				mb->hash.rss =
 					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
-			else if (pkt_flags & PKT_RX_FDIR) {
+			else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 				mb->hash.fdir.hash =
 					rte_le_to_cpu_16(rxdp[j].qw0.hi.csum) &
 					TXGBE_ATR_HASH_MASK;
@@ -1541,7 +1541,7 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 
 		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
-		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		/* Only valid if RTE_MBUF_F_RX_VLAN set in pkt_flags */
 		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
 
 		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
@@ -1552,9 +1552,9 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						       rxq->pkt_type_mask);
 
-		if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+		if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
-		} else if (pkt_flags & PKT_RX_FDIR) {
+		} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 			rxm->hash.fdir.hash =
 				rte_le_to_cpu_16(rxd.qw0.hi.csum) &
 				TXGBE_ATR_HASH_MASK;
@@ -1616,7 +1616,7 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 
 	head->port = rxq->port_id;
 
-	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	/* The vlan_tci field is only valid when RTE_MBUF_F_RX_VLAN is
 	 * set in the pkt_flags field.
 	 */
 	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
@@ -1628,9 +1628,9 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);
 
-	if (likely(pkt_flags & PKT_RX_RSS_HASH)) {
+	if (likely(pkt_flags & RTE_MBUF_F_RX_RSS_HASH)) {
 		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
-	} else if (pkt_flags & PKT_RX_FDIR) {
+	} else if (pkt_flags & RTE_MBUF_F_RX_FDIR) {
 		head->hash.fdir.hash = rte_le_to_cpu_16(desc->qw0.hi.csum)
 				& TXGBE_ATR_HASH_MASK;
 		head->hash.fdir.id = rte_le_to_cpu_16(desc->qw0.hi.ipid);
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2be5edea86..98bdad3e9f 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN) {
+		if (m->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 63f70fc13d..b235749840 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -926,7 +926,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags == 0 && hdr->gso_type == VIRTIO_NET_HDR_GSO_NONE)
 		return 0;
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -938,7 +938,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -960,7 +960,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	/* GSO request, save required information in mbuf */
@@ -976,8 +976,8 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 			case VIRTIO_NET_HDR_GSO_TCPV4:
 			case VIRTIO_NET_HDR_GSO_TCPV6:
-				m->ol_flags |= PKT_RX_LRO | \
-					PKT_RX_L4_CKSUM_NONE;
+				m->ol_flags |= RTE_MBUF_F_RX_LRO | \
+					RTE_MBUF_F_RX_L4_CKSUM_NONE;
 				break;
 			default:
 				return -EINVAL;
@@ -1744,7 +1744,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
+		if (unlikely(m->ol_flags & RTE_MBUF_F_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
@@ -1763,7 +1763,7 @@ virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 			break;
 		}
 
-		if (m->ol_flags & PKT_TX_TCP_SEG)
+		if (m->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 			virtio_tso_fix_cksum(m);
 	}
 
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 77e5cb37e7..d5c259a1f6 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -166,7 +166,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 		return 0;
 
 	/* GSO not support in vec path, skip check */
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -178,7 +178,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -200,7 +200,7 @@ virtio_vec_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 					off) = csum;
 		}
 	} else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID && l4_supported) {
-		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+		m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	}
 
 	return 0;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 5baac221f7..96cc8e79f0 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -639,19 +639,19 @@ virtqueue_notify(struct virtqueue *vq)
 static inline void
 virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 {
-	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = cookie->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (cookie->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	switch (csum_l4) {
-	case PKT_TX_UDP_CKSUM:
+	case RTE_MBUF_F_TX_UDP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		break;
 
-	case PKT_TX_TCP_CKSUM:
+	case RTE_MBUF_F_TX_TCP_CKSUM:
 		hdr->csum_start = cookie->l2_len + cookie->l3_len;
 		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
 		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
@@ -665,8 +665,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 	}
 
 	/* TCP Segmentation Offload */
-	if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-		hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+	if (cookie->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		hdr->gso_type = (cookie->ol_flags & RTE_MBUF_F_TX_IPV6) ?
 			VIRTIO_NET_HDR_GSO_TCPV6 :
 			VIRTIO_NET_HDR_GSO_TCPV4;
 		hdr->gso_size = cookie->tso_segsz;
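
virtqueue_xmit_offload() above assumes the sender filled the length
fields that the flags promise. A sender-side sketch of requesting TSO on
an IPv4/TCP packet (header sizes assumed fixed for illustration; the PMD
adds the implied RTE_MBUF_F_TX_TCP_CKSUM itself, as shown above):

  #include <rte_mbuf.h>
  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_tcp.h>

  static inline void
  ex_request_tso(struct rte_mbuf *m, uint16_t mss)
  {
          m->l2_len = sizeof(struct rte_ether_hdr);
          m->l3_len = sizeof(struct rte_ipv4_hdr);
          m->l4_len = sizeof(struct rte_tcp_hdr);
          m->tso_segsz = mss;
          /* IPV4 vs IPV6 steers gso_type selection in the PMD. */
          m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_SEG;
  }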
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 0c9f881d8a..b769902393 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -48,15 +48,14 @@
 #include "vmxnet3_logs.h"
 #include "vmxnet3_ethdev.h"
 
-#define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN | \
-		PKT_TX_IPV6 |     \
-		PKT_TX_IPV4 |     \
-		PKT_TX_L4_MASK |  \
-		PKT_TX_TCP_SEG)
+#define	VMXNET3_TX_OFFLOAD_MASK	(RTE_MBUF_F_TX_VLAN | \
+		RTE_MBUF_F_TX_IPV6 |     \
+		RTE_MBUF_F_TX_IPV4 |     \
+		RTE_MBUF_F_TX_L4_MASK |  \
+		RTE_MBUF_F_TX_TCP_SEG)
 
 #define	VMXNET3_TX_OFFLOAD_NOTSUP_MASK	\
-	(PKT_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
+	(RTE_MBUF_F_TX_OFFLOAD_MASK ^ VMXNET3_TX_OFFLOAD_MASK)
 
 static const uint32_t rxprod_reg[2] = {VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2};
 
@@ -359,7 +358,7 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Non-TSO packet cannot occupy more than
 		 * VMXNET3_MAX_TXD_PER_PKT TX descriptors.
 		 */
-		if ((ol_flags & PKT_TX_TCP_SEG) == 0 &&
+		if ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0 &&
 				m->nb_segs > VMXNET3_MAX_TXD_PER_PKT) {
 			rte_errno = EINVAL;
 			return i;
@@ -367,8 +366,8 @@ vmxnet3_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* check that only supported TX offloads are requested. */
 		if ((ol_flags & VMXNET3_TX_OFFLOAD_NOTSUP_MASK) != 0 ||
-				(ol_flags & PKT_TX_L4_MASK) ==
-				PKT_TX_SCTP_CKSUM) {
+				(ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+				RTE_MBUF_F_TX_SCTP_CKSUM) {
 			rte_errno = ENOTSUP;
 			return i;
 		}
@@ -416,7 +415,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
 		struct rte_mbuf *m_seg = txm;
 		int copy_size = 0;
-		bool tso = (txm->ol_flags & PKT_TX_TCP_SEG) != 0;
+		bool tso = (txm->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0;
 		/* # of descriptors needed for a packet. */
 		unsigned count = txm->nb_segs;
 
@@ -520,7 +519,7 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN) {
+		if (txm->ol_flags & RTE_MBUF_F_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
@@ -535,23 +534,23 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			gdesc->txd.msscof = mss;
 
 			deferred += (rte_pktmbuf_pkt_len(txm) - gdesc->txd.hlen + mss - 1) / mss;
-		} else if (txm->ol_flags & PKT_TX_L4_MASK) {
+		} else if (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
 			gdesc->txd.om = VMXNET3_OM_CSUM;
 			gdesc->txd.hlen = txm->l2_len + txm->l3_len;
 
-			switch (txm->ol_flags & PKT_TX_L4_MASK) {
-			case PKT_TX_TCP_CKSUM:
+			switch (txm->ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+			case RTE_MBUF_F_TX_TCP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_tcp_hdr, cksum);
 				break;
-			case PKT_TX_UDP_CKSUM:
+			case RTE_MBUF_F_TX_UDP_CKSUM:
 				gdesc->txd.msscof = gdesc->txd.hlen +
 					offsetof(struct rte_udp_hdr,
 						dgram_cksum);
 				break;
 			default:
 				PMD_TX_LOG(WARNING, "requested cksum offload not supported %#llx",
-					   txm->ol_flags & PKT_TX_L4_MASK);
+					   txm->ol_flags & RTE_MBUF_F_TX_L4_MASK);
 				abort();
 			}
 			deferred++;
@@ -739,35 +738,35 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 
 			rxm->tso_segsz = rcde->mss;
 			*vmxnet3_segs_dynfield(rxm) = rcde->segCnt;
-			ol_flags |= PKT_RX_LRO;
+			ol_flags |= RTE_MBUF_F_RX_LRO;
 		}
 	} else { /* Offloads set in eop */
 		/* Check for RSS */
 		if (rcd->rssType != VMXNET3_RCD_RSS_TYPE_NONE) {
-			ol_flags |= PKT_RX_RSS_HASH;
+			ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rxm->hash.rss = rcd->rssHash;
 		}
 
 		/* Check for hardware stripped VLAN tag */
 		if (rcd->ts) {
-			ol_flags |= (PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+			ol_flags |= (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 			rxm->vlan_tci = rte_le_to_cpu_16((uint16_t)rcd->tci);
 		}
 
 		/* Check packet type, checksum errors, etc. */
 		if (rcd->cnc) {
-			ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
 		} else {
 			if (rcd->v4) {
 				packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
 
 				if (rcd->ipc)
-					ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 				else
-					ol_flags |= PKT_RX_IP_CKSUM_BAD;
+					ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -775,17 +774,17 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else if (rcd->v6) {
 				packet_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
 
 				if (rcd->tuc) {
-					ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+					ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 					if (rcd->tcp)
 						packet_type |= RTE_PTYPE_L4_TCP;
 					else
@@ -793,10 +792,10 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 				} else {
 					if (rcd->tcp) {
 						packet_type |= RTE_PTYPE_L4_TCP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					} else if (rcd->udp) {
 						packet_type |= RTE_PTYPE_L4_UDP;
-						ol_flags |= PKT_RX_L4_CKSUM_BAD;
+						ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
 					}
 				}
 			} else {
@@ -804,7 +803,7 @@ vmxnet3_rx_offload(struct vmxnet3_hw *hw, const Vmxnet3_RxCompDesc *rcd,
 			}
 
 			/* Old variants of vmxnet3 do not provide MSS */
-			if ((ol_flags & PKT_RX_LRO) && rxm->tso_segsz == 0)
+			if ((ol_flags & RTE_MBUF_F_RX_LRO) && rxm->tso_segsz == 0)
 				rxm->tso_segsz = vmxnet3_guess_mss(hw,
 						rcd, rxm);
 		}
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 0833b2817e..32c64dda65 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -139,7 +139,7 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 		return lkey;
 	/* Take slower bottom-half on miss. */
 	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
-				  !!(mbuf->ol_flags & EXT_ATTACHED_MBUF));
+				  !!(mbuf->ol_flags & RTE_MBUF_F_EXTERNAL));
 }
 
 
diff --git a/examples/bpf/t2.c b/examples/bpf/t2.c
index b9bce746c0..67cd908cd6 100644
--- a/examples/bpf/t2.c
+++ b/examples/bpf/t2.c
@@ -6,7 +6,7 @@
  * eBPF program sample.
  * Accepts pointer to struct rte_mbuf as an input parameter.
  * cleanup mbuf's vlan_tci and all related RX flags
- * (PKT_RX_VLAN_PKT | PKT_RX_VLAN_STRIPPED).
+ * (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED).
  * Doesn't touch contents of packet data.
  * To compile:
  * clang -O2 -target bpf -Wno-int-to-void-pointer-cast -c t2.c
@@ -27,7 +27,7 @@ entry(void *pkt)
 
 	mb = pkt;
 	mb->vlan_tci = 0;
-	mb->ol_flags &= ~(PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED);
+	mb->ol_flags &= ~(RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED);
 
 	return 1;
 }
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index a7f40970f2..d850e7b97d 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -299,7 +299,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
 			rte_pktmbuf_free(m);
 
 			/* request HW to regenerate IPv4 cksum */
-			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 
 			/* If we fail to fragment the packet */
 			if (unlikely (len2 < 0))
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index d611c7d016..0ebb4b09e4 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -359,7 +359,7 @@ reassemble(struct rte_mbuf *m, uint16_t portid, uint32_t queue,
 			}
 
 			/* update offloading flags */
-			m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
+			m->ol_flags |= (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM);
 		}
 		ip_dst = rte_be_to_cpu_32(ip_hdr->dst_addr);
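
Both examples above hand the rewritten IPv4 header back to hardware for
checksum regeneration. The flag pair only works when the checksum field
is zeroed and l2_len/l3_len describe the frame; a sketch
(rte_ipv4_hdr_len() assumed available, as in recent DPDK):

  #include <rte_mbuf.h>
  #include <rte_ip.h>

  static inline void
  ex_request_ipv4_cksum(struct rte_mbuf *m, struct rte_ipv4_hdr *ip)
  {
          ip->hdr_checksum = 0;           /* hardware fills it in */
          m->l3_len = rte_ipv4_hdr_len(ip);
          /* l2_len must already describe the L2 header. */
          m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;
  }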
 
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index bfa7ff7217..bd233752c8 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -159,8 +159,8 @@ esp_inbound_post(struct rte_mbuf *m, struct ipsec_sa *sa,
 
 	if ((ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		if (m->ol_flags & PKT_RX_SEC_OFFLOAD) {
-			if (m->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
+			if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
 				cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			else
 				cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -464,7 +464,7 @@ esp_outbound_post(struct rte_mbuf *m,
 
 	if ((type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) ||
 			(type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)) {
-		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 	} else {
 		RTE_ASSERT(cop != NULL);
 		if (cop->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7b01872c6f..f6b384a8f4 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -466,7 +466,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	 * with the security session.
 	 */
 
-	if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD &&
+	if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD &&
 			rte_security_dynfield_is_registered()) {
 		struct ipsec_sa *sa;
 		struct ipsec_mbuf_metadata *priv;
@@ -533,7 +533,7 @@ prepare_tx_pkt(struct rte_mbuf *pkt, uint16_t port,
 		ip->ip_sum = 0;
 
 		/* calculate IPv4 cksum in SW */
-		if ((pkt->ol_flags & PKT_TX_IP_CKSUM) == 0)
+		if ((pkt->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) == 0)
 			ip->ip_sum = rte_ipv4_cksum((struct rte_ipv4_hdr *)ip);
 
 		ethhdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
@@ -696,7 +696,7 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 		}
 
 		/* Only check SPI match for processed IPSec packets */
-		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
+		if (i < lim && ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)) {
 			free_pkts(&m, 1);
 			continue;
 		}
@@ -978,7 +978,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -995,7 +995,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 0);
 		} else {
@@ -1029,7 +1029,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	 */
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) {
+		if (!(pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
 			/* Security offload not enabled. So an LPM lookup is
 			 * required to get the hop
 			 */
@@ -1047,7 +1047,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	lpm_pkts = 0;
 
 	for (i = 0; i < nb_pkts; i++) {
-		if (pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD) {
+		if (pkts[i]->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
 			/* Read hop from the SA */
 			pkt_hop = get_hop_for_offload_pkt(pkts[i], 1);
 		} else {
@@ -2302,10 +2302,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		qconf->tx_queue_id[portid] = tx_queueid;
 
 		/* Pre-populate pkt offloads based on capabilities */
-		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
-		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
+		qconf->outbound.ipv4_offloads = RTE_MBUF_F_TX_IPV4;
+		qconf->outbound.ipv6_offloads = RTE_MBUF_F_TX_IPV6;
 		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
-			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
+			qconf->outbound.ipv4_offloads |= RTE_MBUF_F_TX_IP_CKSUM;
 
 		tx_queueid++;
 
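The hunk above illustrates the pattern this series preserves: per-packet mbuf flags are only set when the matching device offload was enabled at configure time. A hedged sketch of the same gating as a standalone helper (assumes a configured port; the function name is illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint64_t
    ipv4_tx_flags_for_port(uint16_t port_id)
    {
            struct rte_eth_dev_info info;
            uint64_t flags = RTE_MBUF_F_TX_IPV4;

            if (rte_eth_dev_info_get(port_id, &info) == 0 &&
                (info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) != 0)
                    flags |= RTE_MBUF_F_TX_IP_CKSUM;
            return flags;
    }
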
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 6f49239c4a..6d3f72a047 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -211,9 +211,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 
 	switch (type) {
 	case PKT_TYPE_PLAIN_IPV4:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -229,9 +229,9 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt,
 		break;
 
 	case PKT_TYPE_PLAIN_IPV6:
-		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 			if (unlikely(pkt->ol_flags &
-				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
 				RTE_LOG(ERR, IPSEC,
 					"Inbound security offload failed\n");
 				goto drop_pkt_and_exit;
@@ -370,7 +370,7 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 				      sess->security.ses, pkt, NULL);
 
 	/* Mark the packet for Tx security offload */
-	pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+	pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 	/* Get the port to which this pkt need to be submitted */
 	port_id = sa->portid;
@@ -485,7 +485,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 						      NULL);
 
 			/* Mark the packet for Tx security offload */
-			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 
 			/* Provide L2 len for Outbound processing */
 			pkt->l2_len = RTE_ETHER_HDR_LEN;
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c9..7f2199290e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -32,7 +32,7 @@
 
 #define IP6_FULL_MASK (sizeof(((struct ip_addr *)NULL)->ip.ip6.ip6) * CHAR_BIT)
 
-#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)
+#define MBUF_NO_SEC_OFFLOAD(m) ((m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) == 0)
 
 struct supported_cipher_algo {
 	const char *keyword;
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index d94eca0353..3d8c82bef1 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -468,7 +468,7 @@ parse_fup(struct ptpv2_data_slave_ordinary *ptp_data)
 			   sizeof(struct clock_id));
 
 		/* Enable flag for hardware timestamping. */
-		created_pkt->ol_flags |= PKT_TX_IEEE1588_TMST;
+		created_pkt->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
 
 		/*Read value from NIC to prevent latching with old value. */
 		rte_eth_timesync_read_tx_timestamp(ptp_data->portid,
@@ -630,7 +630,7 @@ lcore_main(void)
 				continue;
 
 			/* Packet is parsed to determine which type. 8< */
-			if (m->ol_flags & PKT_RX_IEEE1588_PTP)
+			if (m->ol_flags & RTE_MBUF_F_RX_IEEE1588_PTP)
 				parse_ptp_frames(portid, m);
 			/* >8 End of packet is parsed to determine which type. */
 
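The two IEEE 1588 flags renamed here work as a pair: RTE_MBUF_F_TX_IEEE1588_TMST asks the NIC to timestamp a frame on transmit, and RTE_MBUF_F_RX_IEEE1588_PTP marks received PTP frames. A minimal TX-side sketch, assuming timesync was enabled on the port with rte_eth_timesync_enable(); a real application would bound the polling loop as ptpclient does:

    #include <time.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_pause.h>

    static int
    send_timestamped(uint16_t port_id, uint16_t queue, struct rte_mbuf *m)
    {
            struct timespec ts;

            m->ol_flags |= RTE_MBUF_F_TX_IEEE1588_TMST;
            if (rte_eth_tx_burst(port_id, queue, &m, 1) != 1)
                    return -1;
            /* Wait for the NIC to latch the transmit timestamp. */
            while (rte_eth_timesync_read_tx_timestamp(port_id, &ts) < 0)
                    rte_pause();
            return 0;
    }
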
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2254733bc..efda091406 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1037,15 +1037,15 @@ static void virtio_tx_offload(struct rte_mbuf *m)
 	tcp_hdr = rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
 		m->l2_len + m->l3_len);
 
-	m->ol_flags |= PKT_TX_TCP_SEG;
+	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 	if ((ptype & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4) {
-		m->ol_flags |= PKT_TX_IPV4;
-		m->ol_flags |= PKT_TX_IP_CKSUM;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IP_CKSUM;
 		ipv4_hdr = l3_hdr;
 		ipv4_hdr->hdr_checksum = 0;
 		tcp_hdr->cksum = rte_ipv4_phdr_cksum(l3_hdr, m->ol_flags);
 	} else { /* assume ethertype == RTE_ETHER_TYPE_IPV6 */
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		tcp_hdr->cksum = rte_ipv6_phdr_cksum(l3_hdr, m->ol_flags);
 	}
 }
@@ -1116,7 +1116,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN;
+		m->ol_flags |= RTE_MBUF_F_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
@@ -1140,7 +1140,7 @@ virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 		m->vlan_tci = vlan_tag;
 	}
 
-	if (m->ol_flags & PKT_RX_LRO)
+	if (m->ol_flags & RTE_MBUF_F_RX_LRO)
 		virtio_tx_offload(m);
 
 	tx_q->m_table[tx_q->len++] = m;
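
virtio_tx_offload() above is a complete LRO-to-TSO conversion; the general TSO recipe with the new names, for a plain IPv4/TCP packet, looks like this (a sketch: assumes l2_len and l3_len are already valid, the port negotiated TSO, and mss is the desired segment size):

    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    static void
    request_tso(struct rte_mbuf *m, uint16_t mss)
    {
            struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
                            struct rte_ipv4_hdr *, m->l2_len);
            struct rte_tcp_hdr *tcp = rte_pktmbuf_mtod_offset(m,
                            struct rte_tcp_hdr *, m->l2_len + m->l3_len);

            m->l4_len = (tcp->data_off & 0xf0) >> 2;  /* TCP header size */
            m->tso_segsz = mss;
            m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                            RTE_MBUF_F_TX_TCP_SEG;
            ip->hdr_checksum = 0;
            tcp->cksum = rte_ipv4_phdr_cksum(ip, m->ol_flags);
    }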
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 6d80514ba7..ac531af26c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1415,13 +1415,13 @@ struct rte_eth_conf {
 #define DEV_TX_OFFLOAD_SECURITY         0x00020000
 /**
  * Device supports generic UDP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
 /**
  * Device supports generic IP tunneled packet TSO.
- * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
+ * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
 #define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a89945061a..d6a7fc8f68 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1422,11 +1422,12 @@ rte_flow_item_icmp6_nd_opt_tla_eth_mask = {
  * RTE_FLOW_ITEM_TYPE_META
  *
  * Matches a specified metadata value. On egress, metadata can be set
- * either by mbuf dynamic metadata field with PKT_TX_DYNF_METADATA flag or
- * RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
+ * either by mbuf dynamic metadata field with RTE_MBUF_DYNFLAG_TX_METADATA flag
+ * or RTE_FLOW_ACTION_TYPE_SET_META. On ingress, RTE_FLOW_ACTION_TYPE_SET_META
  * sets metadata for a packet and the metadata will be reported via mbuf
- * metadata dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic mbuf
- * field must be registered in advance by rte_flow_dynf_metadata_register().
+ * metadata dynamic field with RTE_MBUF_DYNFLAG_RX_METADATA flag. The dynamic
+ * mbuf field must be registered in advance by
+ * rte_flow_dynf_metadata_register().
  */
 struct rte_flow_item_meta {
 	uint32_t data;
@@ -1900,8 +1901,8 @@ enum rte_flow_action_type {
 	RTE_FLOW_ACTION_TYPE_JUMP,
 
 	/**
-	 * Attaches an integer value to packets and sets PKT_RX_FDIR and
-	 * PKT_RX_FDIR_ID mbuf flags.
+	 * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+	 * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
 	 *
 	 * See struct rte_flow_action_mark.
 	 */
@@ -1909,7 +1910,7 @@ enum rte_flow_action_type {
 
 	/**
 	 * Flags packets. Similar to MARK without a specific value; only
-	 * sets the PKT_RX_FDIR mbuf flag.
+	 * sets the RTE_MBUF_F_RX_FDIR mbuf flag.
 	 *
 	 * No associated configuration structure.
 	 */
@@ -2414,8 +2415,8 @@ enum rte_flow_action_type {
 /**
  * RTE_FLOW_ACTION_TYPE_MARK
  *
- * Attaches an integer value to packets and sets PKT_RX_FDIR and
- * PKT_RX_FDIR_ID mbuf flags.
+ * Attaches an integer value to packets and sets RTE_MBUF_F_RX_FDIR and
+ * RTE_MBUF_F_RX_FDIR_ID mbuf flags.
  *
  * This value is arbitrary and application-defined. Maximum allowed value
  * depends on the underlying implementation. It is returned in the
@@ -2960,10 +2961,10 @@ struct rte_flow_action_set_tag {
  * RTE_FLOW_ACTION_TYPE_SET_META
  *
  * Set metadata. Metadata set by mbuf metadata dynamic field with
- * PKT_TX_DYNF_DATA flag on egress will be overridden by this action. On
+ * RTE_MBUF_DYNFLAG_TX_METADATA flag on egress will be overridden by this action. On
  * ingress, the metadata will be carried by mbuf metadata dynamic field
- * with PKT_RX_DYNF_METADATA flag if set.  The dynamic mbuf field must be
- * registered in advance by rte_flow_dynf_metadata_register().
+ * with RTE_MBUF_DYNFLAG_RX_METADATA flag if set.  The dynamic mbuf field must
+ * be registered in advance by rte_flow_dynf_metadata_register().
  *
  * Altering partial bits is supported with mask. For bits which have never
  * been set, unpredictable value will be seen depending on driver
@@ -3261,8 +3262,12 @@ extern uint64_t rte_flow_dynf_metadata_mask;
 	RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t *)
 
 /* Mbuf dynamic flags for metadata. */
-#define PKT_RX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
-#define PKT_TX_DYNF_METADATA (rte_flow_dynf_metadata_mask)
+#define RTE_MBUF_DYNFLAG_RX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_RX_DYNF_METADATA RTE_DEPRECATED(PKT_RX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_RX_METADATA
+#define RTE_MBUF_DYNFLAG_TX_METADATA (rte_flow_dynf_metadata_mask)
+#define PKT_TX_DYNF_METADATA RTE_DEPRECATED(PKT_TX_DYNF_METADATA) \
+		RTE_MBUF_DYNFLAG_TX_METADATA
 
 __rte_experimental
 static inline uint32_t
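
The pattern above is how every legacy name in this series stays usable: the old macro expands to the new one, prefixed by RTE_DEPRECATED(), which injects a pragma so each use still compiles but emits a warning naming the old macro. Roughly (simplified from rte_common.h):

    #define RTE_PRAGMA(x)          _Pragma(#x)
    #define RTE_PRAGMA_WARNING(w)  RTE_PRAGMA(GCC warning #w)
    #define RTE_DEPRECATED(x)      RTE_PRAGMA_WARNING(#x is deprecated)

    /* So existing code such as:
     *     mb->ol_flags |= PKT_RX_VLAN;
     * still builds, but GCC/clang now print a warning that
     * "PKT_RX_VLAN" is deprecated, pointing at each use site. */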
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 13dfb28401..c67dbdf102 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -869,8 +869,8 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
 	uint16_t dropped;
 
 	if (!eth_rx_queue_info->ena_vector) {
-		/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
-		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
+		/* 0xffff ffff if RTE_MBUF_F_RX_RSS_HASH is set, otherwise 0 */
+		rss_mask = ~(((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0) - 1);
 		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
 		for (i = 0; i < num; i++) {
 			m = mbufs[i];
diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 4d5f303fa6..2c258b22bf 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -18,26 +18,26 @@
 #define TCP_HDR_PSH_MASK ((uint8_t)0x08)
 #define TCP_HDR_FIN_MASK ((uint8_t)0x01)
 
-#define IS_IPV4_TCP(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4))
-
-#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_VXLAN))
-
-#define IS_IPV4_GRE_TCP4(flag) (((flag) & (PKT_TX_TCP_SEG | PKT_TX_IPV4 | \
-				PKT_TX_OUTER_IPV4 | PKT_TX_TUNNEL_MASK)) == \
-		(PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_OUTER_IPV4 | \
-		 PKT_TX_TUNNEL_GRE))
-
-#define IS_IPV4_UDP(flag) (((flag) & (PKT_TX_UDP_SEG | PKT_TX_IPV4)) == \
-		(PKT_TX_UDP_SEG | PKT_TX_IPV4))
+#define IS_IPV4_TCP(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4))
+
+#define IS_IPV4_VXLAN_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_VXLAN_UDP4(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_VXLAN))
+
+#define IS_IPV4_GRE_TCP4(flag) (((flag) & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | \
+				RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_TUNNEL_MASK)) == \
+		(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4 | \
+		 RTE_MBUF_F_TX_TUNNEL_GRE))
+
+#define IS_IPV4_UDP(flag) (((flag) & (RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4)) == \
+		(RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4))
 
 /**
  * Internal function which updates the UDP header of a packet, following
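
Note that these helpers compare against the mask with '==' rather than testing single bits: RTE_MBUF_F_TX_TUNNEL_MASK covers the 4-bit tunnel-type field (bits 45:48), so a plain AND could not tell VXLAN (0x1) apart from, say, GTP (0x7). The same idiom written out, as a hedged standalone check:

    #include <rte_mbuf.h>

    /* True only for TSO-requested, VXLAN-encapsulated IPv4/TCP packets. */
    static inline int
    is_vxlan_tcp4_tso(uint64_t ol_flags)
    {
            const uint64_t want = RTE_MBUF_F_TX_TCP_SEG |
                            RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_OUTER_IPV4;

            return (ol_flags & (want | RTE_MBUF_F_TX_TUNNEL_MASK)) ==
                            (want | RTE_MBUF_F_TX_TUNNEL_VXLAN);
    }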
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 166aace73a..1a7ef30dde 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -37,7 +37,7 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
 	tail_idx = nb_segs - 1;
 
 	/* Only update UDP header for VxLAN packets. */
-	update_udp_hdr = (pkt->ol_flags & PKT_TX_TUNNEL_VXLAN) ? 1 : 0;
+	update_udp_hdr = (pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) ? 1 : 0;
 
 	for (i = 0; i < nb_segs; i++) {
 		update_ipv4_header(segs[i], outer_ipv4_offset, outer_id);
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee..58037d6b5d 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -43,7 +43,7 @@ rte_gso_segment(struct rte_mbuf *pkt,
 		return -EINVAL;
 
 	if (gso_ctx->gso_size >= pkt->pkt_len) {
-		pkt->ol_flags &= (~(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG));
+		pkt->ol_flags &= (~(RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG));
 		return 0;
 	}
 
@@ -57,26 +57,26 @@ rte_gso_segment(struct rte_mbuf *pkt,
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
 			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
 			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
-		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
+		pkt->ol_flags &= (~RTE_MBUF_F_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
 	} else {
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b1..777d0a55fb 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -77,8 +77,8 @@ struct rte_gso_ctx {
  *
  * Before calling rte_gso_segment(), applications must set proper ol_flags
  * for the packet. The GSO library uses the same macros as that of TSO.
- * For example, set PKT_TX_TCP_SEG and PKT_TX_IPV4 in ol_flags to segment
- * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the PKT_TX_TCP_SEG
+ * For example, set RTE_MBUF_F_TX_TCP_SEG and RTE_MBUF_F_TX_IPV4 in ol_flags to segment
+ * a TCP/IPv4 packet. If rte_gso_segment() succeeds, the RTE_MBUF_F_TX_TCP_SEG
  * flag is removed for all GSO segments and the input packet.
  *
  * Each of the newly-created GSO segments is organized as a two-segment
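
A hedged usage sketch of the contract documented above (assumes ctx was initialized with direct/indirect mempools and DEV_TX_OFFLOAD_TCP_TSO in gso_types; the array bound is arbitrary):

    #include <rte_gso.h>
    #include <rte_mbuf.h>

    #define GSO_OUT_MAX 64  /* illustrative bound */

    static int
    software_segment(struct rte_mbuf *pkt, struct rte_gso_ctx *ctx)
    {
            struct rte_mbuf *segs[GSO_OUT_MAX];

            pkt->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_SEG;
            /* On success, RTE_MBUF_F_TX_TCP_SEG has been cleared on the
             * input packet and on every segment in segs[]. */
            return rte_gso_segment(pkt, ctx, segs, GSO_OUT_MAX);
    }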
diff --git a/lib/ipsec/esp_inb.c b/lib/ipsec/esp_inb.c
index 2b1df6a032..17442a98f2 100644
--- a/lib/ipsec/esp_inb.c
+++ b/lib/ipsec/esp_inb.c
@@ -399,7 +399,7 @@ static inline int32_t
 trs_process_check(struct rte_mbuf *mb, struct rte_mbuf **ml,
 	uint32_t *tofs, struct rte_esp_tail espt, uint32_t hlen, uint32_t tlen)
 {
-	if ((mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) != 0 ||
+	if ((mb->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) != 0 ||
 			tlen + hlen > mb->pkt_len)
 		return -EBADMSG;
 
@@ -487,8 +487,8 @@ trs_process_step3(struct rte_mbuf *mb)
 	/* reset mbuf packet type */
 	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
@@ -505,8 +505,8 @@ tun_process_step3(struct rte_mbuf *mb, uint64_t txof_msk, uint64_t txof_val)
 	mb->packet_type = RTE_PTYPE_UNKNOWN;
 	mb->tx_offload = (mb->tx_offload & txof_msk) | txof_val;
 
-	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
-	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	/* clear the RTE_MBUF_F_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~RTE_MBUF_F_RX_SEC_OFFLOAD;
 }
 
 /*
diff --git a/lib/ipsec/esp_outb.c b/lib/ipsec/esp_outb.c
index 1e181cf2ce..2bbd5df2b8 100644
--- a/lib/ipsec/esp_outb.c
+++ b/lib/ipsec/esp_outb.c
@@ -544,7 +544,7 @@ esp_outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	icv_len = sa->icv_len;
 
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0) {
 			ml = rte_pktmbuf_lastseg(mb[i]);
 			/* remove high-order 32 bits of esn from packet len */
 			mb[i]->pkt_len -= sa->sqh_len;
@@ -580,7 +580,7 @@ inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
 	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
 	for (i = 0; i != num; i++) {
 
-		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		mb[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
 		if (ol_flags != 0)
 			rte_security_set_pkt_metadata(ss->security.ctx,
 				ss->security.ses, mb[i], NULL);
diff --git a/lib/ipsec/misc.h b/lib/ipsec/misc.h
index 79b9a20762..8e72ca992d 100644
--- a/lib/ipsec/misc.h
+++ b/lib/ipsec/misc.h
@@ -173,7 +173,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	j = num - n;
 	for (i = 0; j != 0 && i != num; i++) {
 		if (st[i] != 0) {
-			mb[i]->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			mb[i]->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 			j--;
 		}
 	}
diff --git a/lib/ipsec/rte_ipsec_group.h b/lib/ipsec/rte_ipsec_group.h
index ea3bdfad95..60ab297710 100644
--- a/lib/ipsec/rte_ipsec_group.h
+++ b/lib/ipsec/rte_ipsec_group.h
@@ -61,7 +61,7 @@ rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
  * Take as input completed crypto ops, extract related mbufs
  * and group them by rte_ipsec_session they belong to.
  * For mbuf which crypto-op wasn't completed successfully
- * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
  * Note that mbufs with undetermined SA (session-less) are not freed
  * by the function, but are placed beyond mbufs for the last valid group.
  * It is a user responsibility to handle them further.
@@ -95,9 +95,9 @@ rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
 		m = cop[i]->sym[0].m_src;
 		ns = cop[i]->sym[0].session;
 
-		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
 		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
-			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+			m->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
 
 		/* no valid session found */
 		if (ns == NULL) {
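
A sketch of the consumer side described above, with the failure flag driving the per-packet verdict (group field names as declared in this header; the burst size is illustrative):

    #include <rte_ipsec_group.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static uint16_t
    count_offload_failures(const struct rte_crypto_op *cop[], uint16_t num)
    {
            struct rte_mbuf *mb[BURST];
            struct rte_ipsec_group grp[BURST];
            uint16_t nb_grp, n, i, fails = 0;

            nb_grp = rte_ipsec_pkt_crypto_group(cop, mb, grp, num);
            for (n = 0; n != nb_grp; n++)
                    for (i = 0; i != grp[n].cnt; i++)
                            if (grp[n].m[i]->ol_flags &
                                            RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)
                                    fails++;
            return fails;
    }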
diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c
index e59189d215..4754093873 100644
--- a/lib/ipsec/sa.c
+++ b/lib/ipsec/sa.c
@@ -590,7 +590,7 @@ pkt_flag_process(const struct rte_ipsec_session *ss,
 
 	k = 0;
 	for (i = 0; i != num; i++) {
-		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+		if ((mb[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) == 0)
 			k++;
 		else
 			dr[i - k] = i;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index f7e3c1a187..f2e740c363 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -133,7 +133,7 @@ rte_pktmbuf_free_pinned_extmem(void *addr, void *opaque)
 	RTE_ASSERT(m->shinfo->fcb_opaque == m);
 
 	rte_mbuf_ext_refcnt_set(m->shinfo, 1);
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	if (m->next != NULL) {
 		m->next = NULL;
 		m->nb_segs = 1;
@@ -213,7 +213,7 @@ __rte_pktmbuf_init_extmem(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	rte_mbuf_refcnt_set(m, 1);
 	m->next = NULL;
 
@@ -620,7 +620,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	__rte_pktmbuf_copy_hdr(mc, m);
 
 	/* copied mbuf is not indirect or external */
-	mc->ol_flags = m->ol_flags & ~(IND_ATTACHED_MBUF|EXT_ATTACHED_MBUF);
+	mc->ol_flags = m->ol_flags & ~(RTE_MBUF_F_INDIRECT|RTE_MBUF_F_EXTERNAL);
 
 	prev = &mc->next;
 	m_last = mc;
@@ -685,7 +685,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	fprintf(f, "  pkt_len=%u, ol_flags=%#"PRIx64", nb_segs=%u, port=%u",
 		m->pkt_len, m->ol_flags, m->nb_segs, m->port);
 
-	if (m->ol_flags & (PKT_RX_VLAN | PKT_TX_VLAN))
+	if (m->ol_flags & (RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_TX_VLAN))
 		fprintf(f, ", vlan_tci=%u", m->vlan_tci);
 
 	fprintf(f, ", ptype=%#"PRIx32"\n", m->packet_type);
@@ -751,30 +751,30 @@ const void *__rte_pktmbuf_read(const struct rte_mbuf *m, uint32_t off,
 const char *rte_get_rx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_RX_VLAN: return "PKT_RX_VLAN";
-	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
-	case PKT_RX_FDIR: return "PKT_RX_FDIR";
-	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
-	case PKT_RX_L4_CKSUM_GOOD: return "PKT_RX_L4_CKSUM_GOOD";
-	case PKT_RX_L4_CKSUM_NONE: return "PKT_RX_L4_CKSUM_NONE";
-	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
-	case PKT_RX_IP_CKSUM_GOOD: return "PKT_RX_IP_CKSUM_GOOD";
-	case PKT_RX_IP_CKSUM_NONE: return "PKT_RX_IP_CKSUM_NONE";
-	case PKT_RX_OUTER_IP_CKSUM_BAD: return "PKT_RX_OUTER_IP_CKSUM_BAD";
-	case PKT_RX_VLAN_STRIPPED: return "PKT_RX_VLAN_STRIPPED";
-	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
-	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-	case PKT_RX_FDIR_ID: return "PKT_RX_FDIR_ID";
-	case PKT_RX_FDIR_FLX: return "PKT_RX_FDIR_FLX";
-	case PKT_RX_QINQ_STRIPPED: return "PKT_RX_QINQ_STRIPPED";
-	case PKT_RX_QINQ: return "PKT_RX_QINQ";
-	case PKT_RX_LRO: return "PKT_RX_LRO";
-	case PKT_RX_SEC_OFFLOAD: return "PKT_RX_SEC_OFFLOAD";
-	case PKT_RX_SEC_OFFLOAD_FAILED: return "PKT_RX_SEC_OFFLOAD_FAILED";
-	case PKT_RX_OUTER_L4_CKSUM_BAD: return "PKT_RX_OUTER_L4_CKSUM_BAD";
-	case PKT_RX_OUTER_L4_CKSUM_GOOD: return "PKT_RX_OUTER_L4_CKSUM_GOOD";
-	case PKT_RX_OUTER_L4_CKSUM_INVALID:
-		return "PKT_RX_OUTER_L4_CKSUM_INVALID";
+	case RTE_MBUF_F_RX_VLAN: return "RTE_MBUF_F_RX_VLAN";
+	case RTE_MBUF_F_RX_RSS_HASH: return "RTE_MBUF_F_RX_RSS_HASH";
+	case RTE_MBUF_F_RX_FDIR: return "RTE_MBUF_F_RX_FDIR";
+	case RTE_MBUF_F_RX_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_L4_CKSUM_NONE: return "RTE_MBUF_F_RX_L4_CKSUM_NONE";
+	case RTE_MBUF_F_RX_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_IP_CKSUM_GOOD: return "RTE_MBUF_F_RX_IP_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_IP_CKSUM_NONE: return "RTE_MBUF_F_RX_IP_CKSUM_NONE";
+	case RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD";
+	case RTE_MBUF_F_RX_VLAN_STRIPPED: return "RTE_MBUF_F_RX_VLAN_STRIPPED";
+	case RTE_MBUF_F_RX_IEEE1588_PTP: return "RTE_MBUF_F_RX_IEEE1588_PTP";
+	case RTE_MBUF_F_RX_IEEE1588_TMST: return "RTE_MBUF_F_RX_IEEE1588_TMST";
+	case RTE_MBUF_F_RX_FDIR_ID: return "RTE_MBUF_F_RX_FDIR_ID";
+	case RTE_MBUF_F_RX_FDIR_FLX: return "RTE_MBUF_F_RX_FDIR_FLX";
+	case RTE_MBUF_F_RX_QINQ_STRIPPED: return "RTE_MBUF_F_RX_QINQ_STRIPPED";
+	case RTE_MBUF_F_RX_QINQ: return "RTE_MBUF_F_RX_QINQ";
+	case RTE_MBUF_F_RX_LRO: return "RTE_MBUF_F_RX_LRO";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD: return "RTE_MBUF_F_RX_SEC_OFFLOAD";
+	case RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED: return "RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD";
+	case RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID:
+		return "RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID";
 
 	default: return NULL;
 	}
@@ -791,37 +791,37 @@ int
 rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask rx_flags[] = {
-		{ PKT_RX_VLAN, PKT_RX_VLAN, NULL },
-		{ PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, NULL },
-		{ PKT_RX_FDIR, PKT_RX_FDIR, NULL },
-		{ PKT_RX_L4_CKSUM_BAD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_GOOD, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_NONE, PKT_RX_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_L4_CKSUM_UNKNOWN, PKT_RX_L4_CKSUM_MASK,
-		  "PKT_RX_L4_CKSUM_UNKNOWN" },
-		{ PKT_RX_IP_CKSUM_BAD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_GOOD, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_NONE, PKT_RX_IP_CKSUM_MASK, NULL },
-		{ PKT_RX_IP_CKSUM_UNKNOWN, PKT_RX_IP_CKSUM_MASK,
-		  "PKT_RX_IP_CKSUM_UNKNOWN" },
-		{ PKT_RX_OUTER_IP_CKSUM_BAD, PKT_RX_OUTER_IP_CKSUM_BAD, NULL },
-		{ PKT_RX_VLAN_STRIPPED, PKT_RX_VLAN_STRIPPED, NULL },
-		{ PKT_RX_IEEE1588_PTP, PKT_RX_IEEE1588_PTP, NULL },
-		{ PKT_RX_IEEE1588_TMST, PKT_RX_IEEE1588_TMST, NULL },
-		{ PKT_RX_FDIR_ID, PKT_RX_FDIR_ID, NULL },
-		{ PKT_RX_FDIR_FLX, PKT_RX_FDIR_FLX, NULL },
-		{ PKT_RX_QINQ_STRIPPED, PKT_RX_QINQ_STRIPPED, NULL },
-		{ PKT_RX_LRO, PKT_RX_LRO, NULL },
-		{ PKT_RX_SEC_OFFLOAD, PKT_RX_SEC_OFFLOAD, NULL },
-		{ PKT_RX_SEC_OFFLOAD_FAILED, PKT_RX_SEC_OFFLOAD_FAILED, NULL },
-		{ PKT_RX_QINQ, PKT_RX_QINQ, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_BAD, PKT_RX_OUTER_L4_CKSUM_MASK, NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_GOOD, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_VLAN, RTE_MBUF_F_RX_VLAN, NULL },
+		{ RTE_MBUF_F_RX_RSS_HASH, RTE_MBUF_F_RX_RSS_HASH, NULL },
+		{ RTE_MBUF_F_RX_FDIR, RTE_MBUF_F_RX_FDIR, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_BAD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_NONE, RTE_MBUF_F_RX_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_IP_CKSUM_BAD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_NONE, RTE_MBUF_F_RX_IP_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN, RTE_MBUF_F_RX_IP_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD, NULL },
+		{ RTE_MBUF_F_RX_VLAN_STRIPPED, RTE_MBUF_F_RX_VLAN_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_PTP, RTE_MBUF_F_RX_IEEE1588_PTP, NULL },
+		{ RTE_MBUF_F_RX_IEEE1588_TMST, RTE_MBUF_F_RX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_RX_FDIR_ID, RTE_MBUF_F_RX_FDIR_ID, NULL },
+		{ RTE_MBUF_F_RX_FDIR_FLX, RTE_MBUF_F_RX_FDIR_FLX, NULL },
+		{ RTE_MBUF_F_RX_QINQ_STRIPPED, RTE_MBUF_F_RX_QINQ_STRIPPED, NULL },
+		{ RTE_MBUF_F_RX_LRO, RTE_MBUF_F_RX_LRO, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD, RTE_MBUF_F_RX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED, NULL },
+		{ RTE_MBUF_F_RX_QINQ, RTE_MBUF_F_RX_QINQ, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK, NULL },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_INVALID, PKT_RX_OUTER_L4_CKSUM_MASK,
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
 		  NULL },
-		{ PKT_RX_OUTER_L4_CKSUM_UNKNOWN, PKT_RX_OUTER_L4_CKSUM_MASK,
-		  "PKT_RX_OUTER_L4_CKSUM_UNKNOWN" },
+		{ RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN, RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK,
+		  "RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN" },
 	};
 	const char *name;
 	unsigned int i;
@@ -856,32 +856,32 @@ rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 const char *rte_get_tx_ol_flag_name(uint64_t mask)
 {
 	switch (mask) {
-	case PKT_TX_VLAN: return "PKT_TX_VLAN";
-	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
-	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
-	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
-	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
-	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
-	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
-	case PKT_TX_IPV4: return "PKT_TX_IPV4";
-	case PKT_TX_IPV6: return "PKT_TX_IPV6";
-	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
-	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
-	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
-	case PKT_TX_TUNNEL_VXLAN: return "PKT_TX_TUNNEL_VXLAN";
-	case PKT_TX_TUNNEL_GTP: return "PKT_TX_TUNNEL_GTP";
-	case PKT_TX_TUNNEL_GRE: return "PKT_TX_TUNNEL_GRE";
-	case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
-	case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
-	case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
-	case PKT_TX_TUNNEL_VXLAN_GPE: return "PKT_TX_TUNNEL_VXLAN_GPE";
-	case PKT_TX_TUNNEL_IP: return "PKT_TX_TUNNEL_IP";
-	case PKT_TX_TUNNEL_UDP: return "PKT_TX_TUNNEL_UDP";
-	case PKT_TX_QINQ: return "PKT_TX_QINQ";
-	case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
-	case PKT_TX_SEC_OFFLOAD: return "PKT_TX_SEC_OFFLOAD";
-	case PKT_TX_UDP_SEG: return "PKT_TX_UDP_SEG";
-	case PKT_TX_OUTER_UDP_CKSUM: return "PKT_TX_OUTER_UDP_CKSUM";
+	case RTE_MBUF_F_TX_VLAN: return "RTE_MBUF_F_TX_VLAN";
+	case RTE_MBUF_F_TX_IP_CKSUM: return "RTE_MBUF_F_TX_IP_CKSUM";
+	case RTE_MBUF_F_TX_TCP_CKSUM: return "RTE_MBUF_F_TX_TCP_CKSUM";
+	case RTE_MBUF_F_TX_SCTP_CKSUM: return "RTE_MBUF_F_TX_SCTP_CKSUM";
+	case RTE_MBUF_F_TX_UDP_CKSUM: return "RTE_MBUF_F_TX_UDP_CKSUM";
+	case RTE_MBUF_F_TX_IEEE1588_TMST: return "RTE_MBUF_F_TX_IEEE1588_TMST";
+	case RTE_MBUF_F_TX_TCP_SEG: return "RTE_MBUF_F_TX_TCP_SEG";
+	case RTE_MBUF_F_TX_IPV4: return "RTE_MBUF_F_TX_IPV4";
+	case RTE_MBUF_F_TX_IPV6: return "RTE_MBUF_F_TX_IPV6";
+	case RTE_MBUF_F_TX_OUTER_IP_CKSUM: return "RTE_MBUF_F_TX_OUTER_IP_CKSUM";
+	case RTE_MBUF_F_TX_OUTER_IPV4: return "RTE_MBUF_F_TX_OUTER_IPV4";
+	case RTE_MBUF_F_TX_OUTER_IPV6: return "RTE_MBUF_F_TX_OUTER_IPV6";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN: return "RTE_MBUF_F_TX_TUNNEL_VXLAN";
+	case RTE_MBUF_F_TX_TUNNEL_GTP: return "RTE_MBUF_F_TX_TUNNEL_GTP";
+	case RTE_MBUF_F_TX_TUNNEL_GRE: return "RTE_MBUF_F_TX_TUNNEL_GRE";
+	case RTE_MBUF_F_TX_TUNNEL_IPIP: return "RTE_MBUF_F_TX_TUNNEL_IPIP";
+	case RTE_MBUF_F_TX_TUNNEL_GENEVE: return "RTE_MBUF_F_TX_TUNNEL_GENEVE";
+	case RTE_MBUF_F_TX_TUNNEL_MPLSINUDP: return "RTE_MBUF_F_TX_TUNNEL_MPLSINUDP";
+	case RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE: return "RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE";
+	case RTE_MBUF_F_TX_TUNNEL_IP: return "RTE_MBUF_F_TX_TUNNEL_IP";
+	case RTE_MBUF_F_TX_TUNNEL_UDP: return "RTE_MBUF_F_TX_TUNNEL_UDP";
+	case RTE_MBUF_F_TX_QINQ: return "RTE_MBUF_F_TX_QINQ";
+	case RTE_MBUF_F_TX_MACSEC: return "RTE_MBUF_F_TX_MACSEC";
+	case RTE_MBUF_F_TX_SEC_OFFLOAD: return "RTE_MBUF_F_TX_SEC_OFFLOAD";
+	case RTE_MBUF_F_TX_UDP_SEG: return "RTE_MBUF_F_TX_UDP_SEG";
+	case RTE_MBUF_F_TX_OUTER_UDP_CKSUM: return "RTE_MBUF_F_TX_OUTER_UDP_CKSUM";
 	default: return NULL;
 	}
 }
@@ -891,33 +891,33 @@ int
 rte_get_tx_ol_flag_list(uint64_t mask, char *buf, size_t buflen)
 {
 	const struct flag_mask tx_flags[] = {
-		{ PKT_TX_VLAN, PKT_TX_VLAN, NULL },
-		{ PKT_TX_IP_CKSUM, PKT_TX_IP_CKSUM, NULL },
-		{ PKT_TX_TCP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_SCTP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_UDP_CKSUM, PKT_TX_L4_MASK, NULL },
-		{ PKT_TX_L4_NO_CKSUM, PKT_TX_L4_MASK, "PKT_TX_L4_NO_CKSUM" },
-		{ PKT_TX_IEEE1588_TMST, PKT_TX_IEEE1588_TMST, NULL },
-		{ PKT_TX_TCP_SEG, PKT_TX_TCP_SEG, NULL },
-		{ PKT_TX_IPV4, PKT_TX_IPV4, NULL },
-		{ PKT_TX_IPV6, PKT_TX_IPV6, NULL },
-		{ PKT_TX_OUTER_IP_CKSUM, PKT_TX_OUTER_IP_CKSUM, NULL },
-		{ PKT_TX_OUTER_IPV4, PKT_TX_OUTER_IPV4, NULL },
-		{ PKT_TX_OUTER_IPV6, PKT_TX_OUTER_IPV6, NULL },
-		{ PKT_TX_TUNNEL_VXLAN, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GTP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GRE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IPIP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_VXLAN_GPE, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_IP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_TUNNEL_UDP, PKT_TX_TUNNEL_MASK, NULL },
-		{ PKT_TX_QINQ, PKT_TX_QINQ, NULL },
-		{ PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
-		{ PKT_TX_SEC_OFFLOAD, PKT_TX_SEC_OFFLOAD, NULL },
-		{ PKT_TX_UDP_SEG, PKT_TX_UDP_SEG, NULL },
-		{ PKT_TX_OUTER_UDP_CKSUM, PKT_TX_OUTER_UDP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_VLAN, RTE_MBUF_F_TX_VLAN, NULL },
+		{ RTE_MBUF_F_TX_IP_CKSUM, RTE_MBUF_F_TX_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_SCTP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_UDP_CKSUM, RTE_MBUF_F_TX_L4_MASK, NULL },
+		{ RTE_MBUF_F_TX_L4_NO_CKSUM, RTE_MBUF_F_TX_L4_MASK, "RTE_MBUF_F_TX_L4_NO_CKSUM" },
+		{ RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST, NULL },
+		{ RTE_MBUF_F_TX_TCP_SEG, RTE_MBUF_F_TX_TCP_SEG, NULL },
+		{ RTE_MBUF_F_TX_IPV4, RTE_MBUF_F_TX_IPV4, NULL },
+		{ RTE_MBUF_F_TX_IPV6, RTE_MBUF_F_TX_IPV6, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IP_CKSUM, RTE_MBUF_F_TX_OUTER_IP_CKSUM, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV4, RTE_MBUF_F_TX_OUTER_IPV4, NULL },
+		{ RTE_MBUF_F_TX_OUTER_IPV6, RTE_MBUF_F_TX_OUTER_IPV6, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GTP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GRE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IPIP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_GENEVE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_MPLSINUDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_IP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_TUNNEL_UDP, RTE_MBUF_F_TX_TUNNEL_MASK, NULL },
+		{ RTE_MBUF_F_TX_QINQ, RTE_MBUF_F_TX_QINQ, NULL },
+		{ RTE_MBUF_F_TX_MACSEC, RTE_MBUF_F_TX_MACSEC, NULL },
+		{ RTE_MBUF_F_TX_SEC_OFFLOAD, RTE_MBUF_F_TX_SEC_OFFLOAD, NULL },
+		{ RTE_MBUF_F_TX_UDP_SEG, RTE_MBUF_F_TX_UDP_SEG, NULL },
+		{ RTE_MBUF_F_TX_OUTER_UDP_CKSUM, RTE_MBUF_F_TX_OUTER_UDP_CKSUM, NULL },
 	};
 	const char *name;
 	unsigned int i;
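
These tables back the two debug helpers, so applications that dump flags pick up the new names automatically. A short sketch (buffer size is arbitrary):

    #include <stdio.h>
    #include <rte_mbuf.h>

    static void
    dump_rx_flags(const struct rte_mbuf *m)
    {
            char buf[512];

            if (rte_get_rx_ol_flag_list(m->ol_flags, buf, sizeof(buf)) == 0)
                    printf("rx ol_flags: %s\n", buf);
            /* Single-flag lookup: rte_get_rx_ol_flag_name(RTE_MBUF_F_RX_LRO)
             * now returns "RTE_MBUF_F_RX_LRO". */
    }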
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index ec2f4bb188..3ae01c8b3d 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -77,7 +77,7 @@ int rte_get_rx_ol_flag_list(uint64_t mask, char *buf, size_t buflen);
  * @param mask
  *   The mask describing the flag. Usually only one bit must be set.
  *   Several bits can be given if they belong to the same mask.
- *   Ex: PKT_TX_L4_MASK.
+ *   Ex: RTE_MBUF_F_TX_L4_MASK.
  * @return
  *   The name of this flag, or NULL if it's not a valid TX flag.
  */
@@ -849,7 +849,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = RTE_MBUF_PORT_INVALID;
 
-	m->ol_flags &= EXT_ATTACHED_MBUF;
+	m->ol_flags &= RTE_MBUF_F_EXTERNAL;
 	m->packet_type = 0;
 	rte_pktmbuf_reset_headroom(m);
 
@@ -1064,7 +1064,7 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
 	m->data_len = 0;
 	m->data_off = 0;
 
-	m->ol_flags |= EXT_ATTACHED_MBUF;
+	m->ol_flags |= RTE_MBUF_F_EXTERNAL;
 	m->shinfo = shinfo;
 }
 
@@ -1138,7 +1138,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 		/* if m is not direct, get the mbuf that embeds the data */
 		rte_mbuf_refcnt_update(rte_mbuf_from_indirect(m), 1);
 		mi->priv_size = m->priv_size;
-		mi->ol_flags = m->ol_flags | IND_ATTACHED_MBUF;
+		mi->ol_flags = m->ol_flags | RTE_MBUF_F_INDIRECT;
 	}
 
 	__rte_pktmbuf_copy_hdr(mi, m);
@@ -1272,7 +1272,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	struct rte_mbuf_ext_shared_info *shinfo;
 
 	/* Clear flags, mbuf is being freed. */
-	m->ol_flags = EXT_ATTACHED_MBUF;
+	m->ol_flags = RTE_MBUF_F_EXTERNAL;
 	shinfo = m->shinfo;
 
 	/* Optimize for performance - do not dec/reinit */
@@ -1798,28 +1798,28 @@ rte_validate_tx_offload(const struct rte_mbuf *m)
 	uint64_t ol_flags = m->ol_flags;
 
 	/* Does packet set any of available offloads? */
-	if (!(ol_flags & PKT_TX_OFFLOAD_MASK))
+	if (!(ol_flags & RTE_MBUF_F_TX_OFFLOAD_MASK))
 		return 0;
 
 	/* IP checksum can be counted only for IPv4 packet */
-	if ((ol_flags & PKT_TX_IP_CKSUM) && (ol_flags & PKT_TX_IPV6))
+	if ((ol_flags & RTE_MBUF_F_TX_IP_CKSUM) && (ol_flags & RTE_MBUF_F_TX_IPV6))
 		return -EINVAL;
 
 	/* IP type not set when required */
-	if (ol_flags & (PKT_TX_L4_MASK | PKT_TX_TCP_SEG))
-		if (!(ol_flags & (PKT_TX_IPV4 | PKT_TX_IPV6)))
+	if (ol_flags & (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))
+		if (!(ol_flags & (RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IPV6)))
 			return -EINVAL;
 
 	/* Check requirements for TSO packet */
-	if (ol_flags & PKT_TX_TCP_SEG)
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
 		if ((m->tso_segsz == 0) ||
-				((ol_flags & PKT_TX_IPV4) &&
-				!(ol_flags & PKT_TX_IP_CKSUM)))
+				((ol_flags & RTE_MBUF_F_TX_IPV4) &&
+				 !(ol_flags & RTE_MBUF_F_TX_IP_CKSUM)))
 			return -EINVAL;
 
-	/* PKT_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
-	if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) &&
-			!(ol_flags & PKT_TX_OUTER_IPV4))
+	/* RTE_MBUF_F_TX_OUTER_IP_CKSUM set for non outer IPv4 packet. */
+	if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) &&
+			!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4))
 		return -EINVAL;
 
 	return 0;
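
rte_validate_tx_offload() above is the canonical pre-flight check on the TX side. On the RX side, the checksum status flags redefined next in rte_mbuf_core.h are two-bit fields, so they must always be compared against their mask, never bit-tested. A sketch:

    #include <rte_mbuf.h>

    /* Returns 1 when the L4 checksum of a received packet is known good. */
    static inline int
    rx_l4_cksum_good(const struct rte_mbuf *m)
    {
            return (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
                            RTE_MBUF_F_RX_L4_CKSUM_GOOD;
    }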
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 8e7eef319b..1867d580ff 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -43,273 +43,378 @@ extern "C" {
 /**
  * The RX packet is a 802.1q VLAN packet, and the tci has been
  * saved in in mbuf->vlan_tci.
- * If the flag PKT_RX_VLAN_STRIPPED is also present, the VLAN
+ * If the flag RTE_MBUF_F_RX_VLAN_STRIPPED is also present, the VLAN
  * header has been stripped from mbuf data, else it is still
  * present.
  */
-#define PKT_RX_VLAN          (1ULL << 0)
+#define RTE_MBUF_F_RX_VLAN          (1ULL << 0)
+#define PKT_RX_VLAN RTE_DEPRECATED(PKT_RX_VLAN) RTE_MBUF_F_RX_VLAN
 
 /** RX packet with RSS hash result. */
-#define PKT_RX_RSS_HASH      (1ULL << 1)
+#define RTE_MBUF_F_RX_RSS_HASH      (1ULL << 1)
+#define PKT_RX_RSS_HASH RTE_DEPRECATED(PKT_RX_RSS_HASH) RTE_MBUF_F_RX_RSS_HASH
 
  /** RX packet with FDIR match indicate. */
-#define PKT_RX_FDIR          (1ULL << 2)
+#define RTE_MBUF_F_RX_FDIR          (1ULL << 2)
+#define PKT_RX_FDIR RTE_DEPRECATED(PKT_RX_FDIR) RTE_MBUF_F_RX_FDIR
 
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
-#define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
+#define PKT_RX_OUTER_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
 
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
  * in the RX configuration of the PMD.
- * When PKT_RX_VLAN_STRIPPED is set, PKT_RX_VLAN must also be set.
+ * When RTE_MBUF_F_RX_VLAN_STRIPPED is set, RTE_MBUF_F_RX_VLAN must also be set.
  */
-#define PKT_RX_VLAN_STRIPPED (1ULL << 6)
+#define RTE_MBUF_F_RX_VLAN_STRIPPED (1ULL << 6)
+#define PKT_RX_VLAN_STRIPPED RTE_DEPRECATED(PKT_RX_VLAN_STRIPPED) \
+		RTE_MBUF_F_RX_VLAN_STRIPPED
 
 /**
  * Mask of bits used to determine the status of RX IP checksum.
- * - PKT_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
- * - PKT_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
- * - PKT_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
- * - PKT_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN: no information about the RX IP checksum
+ * - RTE_MBUF_F_RX_IP_CKSUM_BAD: the IP checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_IP_CKSUM_GOOD: the IP checksum in the packet is valid
+ * - RTE_MBUF_F_RX_IP_CKSUM_NONE: the IP checksum is not correct in the packet
  *   data, but the integrity of the IP header is verified.
  */
-#define PKT_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_MASK ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_MASK RTE_DEPRECATED(PKT_RX_IP_CKSUM_MASK) \
+		RTE_MBUF_F_RX_IP_CKSUM_MASK
 
-#define PKT_RX_IP_CKSUM_UNKNOWN 0
-#define PKT_RX_IP_CKSUM_BAD     (1ULL << 4)
-#define PKT_RX_IP_CKSUM_GOOD    (1ULL << 7)
-#define PKT_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_IP_CKSUM_BAD     (1ULL << 4)
+#define RTE_MBUF_F_RX_IP_CKSUM_GOOD    (1ULL << 7)
+#define RTE_MBUF_F_RX_IP_CKSUM_NONE    ((1ULL << 4) | (1ULL << 7))
+#define PKT_RX_IP_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_IP_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define PKT_RX_IP_CKSUM_BAD RTE_DEPRECATED(PKT_RX_IP_CKSUM_BAD) \
+		RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define PKT_RX_IP_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_IP_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define PKT_RX_IP_CKSUM_NONE RTE_DEPRECATED(PKT_RX_IP_CKSUM_NONE) \
+		RTE_MBUF_F_RX_IP_CKSUM_NONE
 
 /**
  * Mask of bits used to determine the status of RX L4 checksum.
- * - PKT_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
- * - PKT_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
- * - PKT_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
- * - PKT_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
+ * - RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN: no information about the RX L4 checksum
+ * - RTE_MBUF_F_RX_L4_CKSUM_BAD: the L4 checksum in the packet is wrong
+ * - RTE_MBUF_F_RX_L4_CKSUM_GOOD: the L4 checksum in the packet is valid
+ * - RTE_MBUF_F_RX_L4_CKSUM_NONE: the L4 checksum is not correct in the packet
  *   data, but the integrity of the L4 data is verified.
  */
-#define PKT_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
-
-#define PKT_RX_L4_CKSUM_UNKNOWN 0
-#define PKT_RX_L4_CKSUM_BAD     (1ULL << 3)
-#define PKT_RX_L4_CKSUM_GOOD    (1ULL << 8)
-#define PKT_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define RTE_MBUF_F_RX_L4_CKSUM_MASK ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN 0
+#define RTE_MBUF_F_RX_L4_CKSUM_BAD     (1ULL << 3)
+#define RTE_MBUF_F_RX_L4_CKSUM_GOOD    (1ULL << 8)
+#define RTE_MBUF_F_RX_L4_CKSUM_NONE    ((1ULL << 3) | (1ULL << 8))
+#define PKT_RX_L4_CKSUM_UNKNOWN RTE_DEPRECATED(PKT_RX_L4_CKSUM_UNKNOWN) \
+		RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN
+#define PKT_RX_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define PKT_RX_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define PKT_RX_L4_CKSUM_NONE RTE_DEPRECATED(PKT_RX_L4_CKSUM_NONE) \
+		RTE_MBUF_F_RX_L4_CKSUM_NONE
 
 /** RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_PTP  (1ULL << 9)
+#define RTE_MBUF_F_RX_IEEE1588_PTP  (1ULL << 9)
+#define PKT_RX_IEEE1588_PTP RTE_DEPRECATED(PKT_RX_IEEE1588_PTP) \
+		RTE_MBUF_F_RX_IEEE1588_PTP
 
 /** RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_IEEE1588_TMST (1ULL << 10)
+#define RTE_MBUF_F_RX_IEEE1588_TMST (1ULL << 10)
+#define PKT_RX_IEEE1588_TMST RTE_DEPRECATED(PKT_RX_IEEE1588_TMST) \
+		RTE_MBUF_F_RX_IEEE1588_TMST
 
 /** FD id reported if FDIR match. */
-#define PKT_RX_FDIR_ID       (1ULL << 13)
+#define RTE_MBUF_F_RX_FDIR_ID       (1ULL << 13)
+#define PKT_RX_FDIR_ID RTE_DEPRECATED(PKT_RX_FDIR_ID) \
+		RTE_MBUF_F_RX_FDIR_ID
 
 /** Flexible bytes reported if FDIR match. */
-#define PKT_RX_FDIR_FLX      (1ULL << 14)
+#define RTE_MBUF_F_RX_FDIR_FLX      (1ULL << 14)
+#define PKT_RX_FDIR_FLX RTE_DEPRECATED(PKT_RX_FDIR_FLX) \
+		RTE_MBUF_F_RX_FDIR_FLX
 
 /**
  * The outer VLAN has been stripped by the hardware and its TCI is
  * saved in mbuf->vlan_tci_outer.
  * This can only happen if VLAN stripping is enabled in the Rx
  * configuration of the PMD.
- * When PKT_RX_QINQ_STRIPPED is set, the flags PKT_RX_VLAN and PKT_RX_QINQ
- * must also be set.
+ * When RTE_MBUF_F_RX_QINQ_STRIPPED is set, the flags RTE_MBUF_F_RX_VLAN
+ * and RTE_MBUF_F_RX_QINQ must also be set.
  *
- * - If both PKT_RX_QINQ_STRIPPED and PKT_RX_VLAN_STRIPPED are set, the 2 VLANs
- *   have been stripped by the hardware and their TCIs are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
- * - If PKT_RX_QINQ_STRIPPED is set and PKT_RX_VLAN_STRIPPED is unset, only the
- *   outer VLAN is removed from packet data, but both tci are saved in
- *   mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If both RTE_MBUF_F_RX_QINQ_STRIPPED and RTE_MBUF_F_RX_VLAN_STRIPPED are
+ *   set, the 2 VLANs have been stripped by the hardware and their TCIs are
+ *   saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
+ * - If RTE_MBUF_F_RX_QINQ_STRIPPED is set and RTE_MBUF_F_RX_VLAN_STRIPPED
+ *   is unset, only the outer VLAN is removed from packet data, but both tci
+ *   are saved in mbuf->vlan_tci (inner) and mbuf->vlan_tci_outer (outer).
  */
-#define PKT_RX_QINQ_STRIPPED (1ULL << 15)
+#define RTE_MBUF_F_RX_QINQ_STRIPPED (1ULL << 15)
+#define PKT_RX_QINQ_STRIPPED RTE_DEPRECATED(PKT_RX_QINQ_STRIPPED) \
+		RTE_MBUF_F_RX_QINQ_STRIPPED
 
 /**
  * When packets are coalesced by a hardware or virtual driver, this flag
  * can be set in the RX mbuf, meaning that the m->tso_segsz field is
  * valid and is set to the segment size of original packets.
  */
-#define PKT_RX_LRO           (1ULL << 16)
+#define RTE_MBUF_F_RX_LRO           (1ULL << 16)
+#define PKT_RX_LRO RTE_DEPRECATED(PKT_RX_LRO) RTE_MBUF_F_RX_LRO
 
 /* There is no flag defined at offset 17. It is free for any future use. */
 
 /**
  * Indicate that security offload processing was applied on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD	(1ULL << 18)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD	(1ULL << 18)
+#define PKT_RX_SEC_OFFLOAD RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD
 
 /**
  * Indicate that security offload processing failed on the RX packet.
  */
-#define PKT_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED	(1ULL << 19)
+#define PKT_RX_SEC_OFFLOAD_FAILED RTE_DEPRECATED(PKT_RX_SEC_OFFLOAD_FAILED) \
+		RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
 
 /**
  * The RX packet is a double VLAN, and the outer tci has been
- * saved in mbuf->vlan_tci_outer. If this flag is set, PKT_RX_VLAN
+ * saved in mbuf->vlan_tci_outer. If this flag is set, RTE_MBUF_F_RX_VLAN
  * must also be set and the inner tci is saved in mbuf->vlan_tci.
- * If the flag PKT_RX_QINQ_STRIPPED is also present, both VLANs
+ * If the flag RTE_MBUF_F_RX_QINQ_STRIPPED is also present, both VLANs
  * headers have been stripped from mbuf data, else they are still
  * present.
  */
-#define PKT_RX_QINQ          (1ULL << 20)
+#define RTE_MBUF_F_RX_QINQ          (1ULL << 20)
+#define PKT_RX_QINQ RTE_DEPRECATED(PKT_RX_QINQ) RTE_MBUF_F_RX_QINQ
 
 /**
  * Mask of bits used to determine the status of outer RX L4 checksum.
- * - PKT_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4 checksum
- * - PKT_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet is wrong
- * - PKT_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet is valid
- * - PKT_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN: no info about the outer RX L4
+ *   checksum
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD: the outer L4 checksum in the packet
+ *   is wrong
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD: the outer L4 checksum in the packet
+ *   is valid
+ * - RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID: invalid outer L4 checksum state.
  *
- * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
- * HW capability, At minimum, the PMD should support
- * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
- */
-#define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
-
-#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN	0
-#define PKT_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
-#define PKT_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
-#define PKT_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
-
-/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-
-#define PKT_FIRST_FREE (1ULL << 23)
-#define PKT_LAST_FREE (1ULL << 40)
-
-/* add new TX flags here, don't forget to update PKT_LAST_FREE  */
+ * The detection of RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD shall be based on the
+ * given HW capability. At a minimum, the PMD should support
+ * RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN and RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+ * states if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ */
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_MASK RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_MASK) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_MASK
+
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN	0
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD	(1ULL << 21)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD	(1ULL << 22)
+#define RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID	((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_OUTER_L4_CKSUM_UNKNOWN \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_UNKNOWN) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
+#define PKT_RX_OUTER_L4_CKSUM_BAD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_BAD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD
+#define PKT_RX_OUTER_L4_CKSUM_GOOD RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_GOOD) \
+		RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
+#define PKT_RX_OUTER_L4_CKSUM_INVALID \
+	RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_INVALID) \
+	RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+
+/* add new RX flags here, don't forget to update RTE_MBUF_F_FIRST_FREE */
+
+#define RTE_MBUF_F_FIRST_FREE (1ULL << 23)
+#define PKT_FIRST_FREE RTE_DEPRECATED(PKT_FIRST_FREE) RTE_MBUF_F_FIRST_FREE
+#define RTE_MBUF_F_LAST_FREE (1ULL << 40)
+#define PKT_LAST_FREE RTE_DEPRECATED(PKT_LAST_FREE) RTE_MBUF_F_LAST_FREE
+
+/* add new TX flags here, don't forget to update RTE_MBUF_F_LAST_FREE  */
 
 /**
  * Outer UDP checksum offload flag. This flag is used for enabling
  * outer UDP checksum in PMD. To use outer UDP checksum, the user needs to
  * 1) Enable the following in mbuf,
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
- * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
- * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
+ * b) Set the RTE_MBUF_F_TX_OUTER_UDP_CKSUM flag.
+ * c) Set the RTE_MBUF_F_TX_OUTER_IPV4 or RTE_MBUF_F_TX_OUTER_IPV6 flag.
  * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
-#define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define RTE_MBUF_F_TX_OUTER_UDP_CKSUM     (1ULL << 41)
+#define PKT_TX_OUTER_UDP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_UDP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM
 
 /**
  * UDP Fragmentation Offload flag. This flag is used for enabling UDP
  * fragmentation in SW or in HW. When use UFO, mbuf->tso_segsz is used
  * to store the MSS of UDP fragments.
  */
-#define PKT_TX_UDP_SEG	(1ULL << 42)
+#define RTE_MBUF_F_TX_UDP_SEG	(1ULL << 42)
+#define PKT_TX_UDP_SEG RTE_DEPRECATED(PKT_TX_UDP_SEG) RTE_MBUF_F_TX_UDP_SEG
 
 /**
  * Request security offload processing on the TX packet.
  * To use Tx security offload, the user needs to fill l2_len in mbuf
  * indicating L2 header size and where L3 header starts.
  */
-#define PKT_TX_SEC_OFFLOAD	(1ULL << 43)
+#define RTE_MBUF_F_TX_SEC_OFFLOAD	(1ULL << 43)
+#define PKT_TX_SEC_OFFLOAD RTE_DEPRECATED(PKT_TX_SEC_OFFLOAD) \
+		RTE_MBUF_F_TX_SEC_OFFLOAD
 
 /**
  * Offload the MACsec. This flag must be set by the application to enable
  * this offload feature for a packet to be transmitted.
  */
-#define PKT_TX_MACSEC        (1ULL << 44)
+#define RTE_MBUF_F_TX_MACSEC        (1ULL << 44)
+#define PKT_TX_MACSEC RTE_DEPRECATED(PKT_TX_MACSEC) RTE_MBUF_F_TX_MACSEC
 
 /**
  * Bits 45:48 used for the tunnel type.
  * The tunnel type must be specified for TSO or checksum on the inner part
  * of tunnel packets.
- * These flags can be used with PKT_TX_TCP_SEG for TSO, or PKT_TX_xxx_CKSUM.
+ * These flags can be used with RTE_MBUF_F_TX_TCP_SEG for TSO, or
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * The mbuf fields for inner and outer header lengths are required:
  * outer_l2_len, outer_l3_len, l2_len, l3_len, l4_len and tso_segsz for TSO.
  */
-#define PKT_TX_TUNNEL_VXLAN   (0x1ULL << 45)
-#define PKT_TX_TUNNEL_GRE     (0x2ULL << 45)
-#define PKT_TX_TUNNEL_IPIP    (0x3ULL << 45)
-#define PKT_TX_TUNNEL_GENEVE  (0x4ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN   (0x1ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GRE     (0x2ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IPIP    (0x3ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GENEVE  (0x4ULL << 45)
 /** TX packet with MPLS-in-UDP RFC 7510 header. */
-#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
-#define PKT_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
-#define PKT_TX_TUNNEL_GTP       (0x7ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE (0x6ULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_GTP       (0x7ULL << 45)
 /**
  * Generic IP encapsulated tunnel type, used for TSO and checksum offload.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
- * or PKT_TX_TUNNEL_IPIP if possible.
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_GRE
+ * or RTE_MBUF_F_TX_TUNNEL_IPIP if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_IP (0xDULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_IP (0xDULL << 45)
 /**
  * Generic UDP encapsulated tunnel type, used for TSO and checksum offload.
  * UDP tunnel type implies outer IP layer.
  * It can be used for tunnels which are not standards or listed above.
- * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
+ * It is preferred to use specific tunnel flags like RTE_MBUF_F_TX_TUNNEL_VXLAN
  * if possible.
  * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
- * PKT_TX_xxx_CKSUM.
+ * RTE_MBUF_F_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
  * or checksum are not expected to be updated.
  */
-#define PKT_TX_TUNNEL_UDP (0xEULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_UDP (0xEULL << 45)
 /* add new TX TUNNEL type here */
-#define PKT_TX_TUNNEL_MASK    (0xFULL << 45)
+#define RTE_MBUF_F_TX_TUNNEL_MASK    (0xFULL << 45)
+
+#define PKT_TX_TUNNEL_VXLAN RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define PKT_TX_TUNNEL_GRE RTE_DEPRECATED(PKT_TX_TUNNEL_GRE) \
+		RTE_MBUF_F_TX_TUNNEL_GRE
+#define PKT_TX_TUNNEL_IPIP RTE_DEPRECATED(PKT_TX_TUNNEL_IPIP) \
+		RTE_MBUF_F_TX_TUNNEL_IPIP
+#define PKT_TX_TUNNEL_GENEVE RTE_DEPRECATED(PKT_TX_TUNNEL_GENEVE) \
+		RTE_MBUF_F_TX_TUNNEL_GENEVE
+#define PKT_TX_TUNNEL_MPLSINUDP RTE_DEPRECATED(PKT_TX_TUNNEL_MPLSINUDP) \
+		RTE_MBUF_F_TX_TUNNEL_MPLSINUDP
+#define PKT_TX_TUNNEL_VXLAN_GPE RTE_DEPRECATED(PKT_TX_TUNNEL_VXLAN_GPE) \
+		RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE
+#define PKT_TX_TUNNEL_GTP RTE_DEPRECATED(PKT_TX_TUNNEL_GTP) \
+		RTE_MBUF_F_TX_TUNNEL_GTP
+#define PKT_TX_TUNNEL_IP RTE_DEPRECATED(PKT_TX_TUNNEL_IP) \
+		RTE_MBUF_F_TX_TUNNEL_IP
+#define PKT_TX_TUNNEL_UDP RTE_DEPRECATED(PKT_TX_TUNNEL_UDP) \
+		RTE_MBUF_F_TX_TUNNEL_UDP
+#define PKT_TX_TUNNEL_MASK RTE_DEPRECATED(PKT_TX_TUNNEL_MASK) \
+		RTE_MBUF_F_TX_TUNNEL_MASK
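
Illustrative sketch (not part of the patch) of the length-field
convention for tunnel offloads: l2_len spans the outer L4, tunnel and
inner Ethernet headers, while the outer_* fields cover the outer
Ethernet and IP headers. Assumes m is a fully built VXLAN packet
requesting inner IP checksum offload:

	m->outer_l2_len = sizeof(struct rte_ether_hdr);
	m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
	m->l2_len = sizeof(struct rte_udp_hdr) +
		    sizeof(struct rte_vxlan_hdr) +
		    sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->ol_flags |= RTE_MBUF_F_TX_TUNNEL_VXLAN |
		       RTE_MBUF_F_TX_OUTER_IPV4 |
		       RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM;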
 
 /**
  * Double VLAN insertion (QinQ) request to driver, driver may offload the
  * insertion based on device capability.
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
-#define PKT_TX_QINQ        (1ULL << 49)
-/** This old name is deprecated. */
-#define PKT_TX_QINQ_PKT RTE_DEPRECATED(PKT_TX_QINQ_PKT) PKT_TX_QINQ
+#define RTE_MBUF_F_TX_QINQ        (1ULL << 49)
+#define PKT_TX_QINQ RTE_DEPRECATED(PKT_TX_QINQ) RTE_MBUF_F_TX_QINQ
+#define PKT_TX_QINQ_PKT RTE_DEPRECATED(PKT_TX_QINQ_PKT) RTE_MBUF_F_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
  * packet to be transmitted on hardware supporting TSO:
- *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
- *    PKT_TX_TCP_CKSUM)
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag
+ *  - set the RTE_MBUF_F_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ *    RTE_MBUF_F_TX_TCP_CKSUM)
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
+ *  - if it's IPv4, set the RTE_MBUF_F_TX_IP_CKSUM flag
  *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
  */
-#define PKT_TX_TCP_SEG       (1ULL << 50)
+#define RTE_MBUF_F_TX_TCP_SEG       (1ULL << 50)
+#define PKT_TX_TCP_SEG RTE_DEPRECATED(PKT_TX_TCP_SEG) RTE_MBUF_F_TX_TCP_SEG
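
A TSO request sketch (not part of the patch) following the steps
listed above, assuming m holds a non-tunneled IPv4/TCP packet; the
MSS value is hypothetical:

	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1400;
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_SEG;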
 
 /** TX IEEE1588 packet to timestamp. */
-#define PKT_TX_IEEE1588_TMST (1ULL << 51)
+#define RTE_MBUF_F_TX_IEEE1588_TMST (1ULL << 51)
+#define PKT_TX_IEEE1588_TMST RTE_DEPRECATED(PKT_TX_IEEE1588_TMST) \
+		RTE_MBUF_F_TX_IEEE1588_TMST
 
-/**
+/*
  * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
  * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
  * L4 checksum offload, the user needs to:
  *  - fill l2_len and l3_len in mbuf
- *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - set the flags RTE_MBUF_F_TX_TCP_CKSUM, RTE_MBUF_F_TX_SCTP_CKSUM or
+ *    RTE_MBUF_F_TX_UDP_CKSUM
+ *  - set the flag RTE_MBUF_F_TX_IPV4 or RTE_MBUF_F_TX_IPV6
  */
-#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
+
+/** Disable L4 cksum of TX pkt. */
+#define RTE_MBUF_F_TX_L4_NO_CKSUM   (0ULL << 52)
 
 /** TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_TCP_CKSUM     (1ULL << 52)
+#define RTE_MBUF_F_TX_TCP_CKSUM     (1ULL << 52)
 
 /** SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM    (2ULL << 52)
+#define RTE_MBUF_F_TX_SCTP_CKSUM    (2ULL << 52)
 
 /** UDP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM     (3ULL << 52)
+#define RTE_MBUF_F_TX_UDP_CKSUM     (3ULL << 52)
 
 /** Mask for L4 cksum offload request. */
-#define PKT_TX_L4_MASK       (3ULL << 52)
+#define RTE_MBUF_F_TX_L4_MASK       (3ULL << 52)
+
+#define PKT_TX_L4_NO_CKSUM RTE_DEPRECATED(PKT_TX_L4_NO_CKSUM) \
+		RTE_MBUF_F_TX_L4_NO_CKSUM
+#define PKT_TX_TCP_CKSUM RTE_DEPRECATED(PKT_TX_TCP_CKSUM) \
+		RTE_MBUF_F_TX_TCP_CKSUM
+#define PKT_TX_SCTP_CKSUM RTE_DEPRECATED(PKT_TX_SCTP_CKSUM) \
+		RTE_MBUF_F_TX_SCTP_CKSUM
+#define PKT_TX_UDP_CKSUM RTE_DEPRECATED(PKT_TX_UDP_CKSUM) \
+		RTE_MBUF_F_TX_UDP_CKSUM
+#define PKT_TX_L4_MASK RTE_DEPRECATED(PKT_TX_L4_MASK) RTE_MBUF_F_TX_L4_MASK
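
The L4 request is a 2-bit field rather than a set of independent bits,
so it must be compared through the mask. A sketch (not part of the
patch), with do_udp_cksum() a hypothetical helper:

	if ((m->ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
	    RTE_MBUF_F_TX_UDP_CKSUM)
		do_udp_cksum(m);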
 
 /**
- * Offload the IP checksum in the hardware. The flag PKT_TX_IPV4 should
+ * Offload the IP checksum in the hardware. The flag RTE_MBUF_F_TX_IPV4 should
  * also be set by the application, although a PMD will only check
- * PKT_TX_IP_CKSUM.
+ * RTE_MBUF_F_TX_IP_CKSUM.
  *  - fill the mbuf offload information: l2_len, l3_len
  */
-#define PKT_TX_IP_CKSUM      (1ULL << 54)
+#define RTE_MBUF_F_TX_IP_CKSUM      (1ULL << 54)
+#define PKT_TX_IP_CKSUM RTE_DEPRECATED(PKT_TX_IP_CKSUM) RTE_MBUF_F_TX_IP_CKSUM
 
 /**
  * Packet is IPv4. This flag must be set when using any offload feature
@@ -317,7 +422,8 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV4          (1ULL << 55)
+#define RTE_MBUF_F_TX_IPV4          (1ULL << 55)
+#define PKT_TX_IPV4 RTE_DEPRECATED(PKT_TX_IPV4) RTE_MBUF_F_TX_IPV4
 
 /**
  * Packet is IPv6. This flag must be set when using an offload feature
@@ -325,67 +431,77 @@ extern "C" {
  * packet. If the packet is a tunneled packet, this flag is related to
  * the inner headers.
  */
-#define PKT_TX_IPV6          (1ULL << 56)
+#define RTE_MBUF_F_TX_IPV6          (1ULL << 56)
+#define PKT_TX_IPV6 RTE_DEPRECATED(PKT_TX_IPV6) RTE_MBUF_F_TX_IPV6
 
 /**
  * VLAN tag insertion request to driver, driver may offload the insertion
  * based on the device capability.
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
-#define PKT_TX_VLAN          (1ULL << 57)
-/* this old name is deprecated */
-#define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) PKT_TX_VLAN
+#define RTE_MBUF_F_TX_VLAN          (1ULL << 57)
+#define PKT_TX_VLAN RTE_DEPRECATED(PKT_TX_VLAN) RTE_MBUF_F_TX_VLAN
+#define PKT_TX_VLAN_PKT RTE_DEPRECATED(PKT_TX_VLAN_PKT) RTE_MBUF_F_TX_VLAN
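
A VLAN insertion request sketch (not part of the patch); the TCI value
is hypothetical and the port is assumed to be configured with
DEV_TX_OFFLOAD_VLAN_INSERT:

	m->vlan_tci = 100;
	m->ol_flags |= RTE_MBUF_F_TX_VLAN;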
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
- * flag PKT_TX_OUTER_IPV4 should also be set by the application, although
- * a PMD will only check PKT_TX_OUTER_IP_CKSUM.
+ * flag RTE_MBUF_F_TX_OUTER_IPV4 should also be set by the application, although
+ * a PMD will only check RTE_MBUF_F_TX_OUTER_IP_CKSUM.
  *  - fill the mbuf offload information: outer_l2_len, outer_l3_len
  */
-#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define RTE_MBUF_F_TX_OUTER_IP_CKSUM   (1ULL << 58)
+#define PKT_TX_OUTER_IP_CKSUM RTE_DEPRECATED(PKT_TX_OUTER_IP_CKSUM) \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM
 
 /**
  * Packet outer header is IPv4. This flag must be set when using any
  * outer offload feature (L3 or L4 checksum) to tell the NIC that the
  * outer header of the tunneled packet is an IPv4 packet.
  */
-#define PKT_TX_OUTER_IPV4   (1ULL << 59)
+#define RTE_MBUF_F_TX_OUTER_IPV4   (1ULL << 59)
+#define PKT_TX_OUTER_IPV4 RTE_DEPRECATED(PKT_TX_OUTER_IPV4) \
+		RTE_MBUF_F_TX_OUTER_IPV4
 
 /**
  * Packet outer header is IPv6. This flag must be set when using any
  * outer offload feature (L4 checksum) to tell the NIC that the outer
  * header of the tunneled packet is an IPv6 packet.
  */
-#define PKT_TX_OUTER_IPV6    (1ULL << 60)
+#define RTE_MBUF_F_TX_OUTER_IPV6    (1ULL << 60)
+#define PKT_TX_OUTER_IPV6 RTE_DEPRECATED(PKT_TX_OUTER_IPV6) \
+		RTE_MBUF_F_TX_OUTER_IPV6
 
 /**
  * Bitmask of all supported packet Tx offload features flags,
  * which can be set for packet.
  */
-#define PKT_TX_OFFLOAD_MASK (    \
-		PKT_TX_OUTER_IPV6 |	 \
-		PKT_TX_OUTER_IPV4 |	 \
-		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN |        \
-		PKT_TX_IPV6 |		 \
-		PKT_TX_IPV4 |		 \
-		PKT_TX_IP_CKSUM |        \
-		PKT_TX_L4_MASK |         \
-		PKT_TX_IEEE1588_TMST |	 \
-		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ |        \
-		PKT_TX_TUNNEL_MASK |	 \
-		PKT_TX_MACSEC |		 \
-		PKT_TX_SEC_OFFLOAD |	 \
-		PKT_TX_UDP_SEG |	 \
-		PKT_TX_OUTER_UDP_CKSUM)
+#define RTE_MBUF_F_TX_OFFLOAD_MASK (    \
+		RTE_MBUF_F_TX_OUTER_IPV6 |	 \
+		RTE_MBUF_F_TX_OUTER_IPV4 |	 \
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |  \
+		RTE_MBUF_F_TX_VLAN |        \
+		RTE_MBUF_F_TX_IPV6 |		 \
+		RTE_MBUF_F_TX_IPV4 |		 \
+		RTE_MBUF_F_TX_IP_CKSUM |        \
+		RTE_MBUF_F_TX_L4_MASK |         \
+		RTE_MBUF_F_TX_IEEE1588_TMST |	 \
+		RTE_MBUF_F_TX_TCP_SEG |         \
+		RTE_MBUF_F_TX_QINQ |        \
+		RTE_MBUF_F_TX_TUNNEL_MASK |	 \
+		RTE_MBUF_F_TX_MACSEC |		 \
+		RTE_MBUF_F_TX_SEC_OFFLOAD |	 \
+		RTE_MBUF_F_TX_UDP_SEG |	 \
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM)
+#define PKT_TX_OFFLOAD_MASK RTE_DEPRECATED(PKT_TX_OFFLOAD_MASK) \
+		RTE_MBUF_F_TX_OFFLOAD_MASK
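
A sketch (not part of the patch) of the kind of sanity check a Tx
prepare path can build on this mask; supported is a hypothetical
per-port bitmask of the flags the device accepts:

	uint64_t requested = m->ol_flags & RTE_MBUF_F_TX_OFFLOAD_MASK;

	if (requested & ~supported)
		return -ENOTSUP;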
 
 /**
  * Mbuf having an external buffer attached. shinfo in mbuf must be filled.
  */
-#define EXT_ATTACHED_MBUF    (1ULL << 61)
+#define RTE_MBUF_F_EXTERNAL    (1ULL << 61)
+#define EXT_ATTACHED_MBUF RTE_DEPRECATED(EXT_ATTACHED_MBUF) RTE_MBUF_F_EXTERNAL
 
-#define IND_ATTACHED_MBUF    (1ULL << 62) /**< Indirect attached mbuf */
+#define RTE_MBUF_F_INDIRECT    (1ULL << 62) /**< Indirect attached mbuf */
+#define IND_ATTACHED_MBUF RTE_DEPRECATED(IND_ATTACHED_MBUF) RTE_MBUF_F_INDIRECT
 
 /** Alignment constraint of mbuf private area. */
 #define RTE_MBUF_PRIV_ALIGN 8
@@ -532,7 +648,7 @@ struct rte_mbuf {
 
 	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
 	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	/** VLAN TCI (CPU order), valid if PKT_RX_VLAN is set. */
+	/** VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_VLAN is set. */
 	uint16_t vlan_tci;
 
 	RTE_STD_C11
@@ -550,7 +666,7 @@ struct rte_mbuf {
 				};
 				uint32_t hi;
 				/**< First 4 flexible bytes or FD ID, dependent
-				 * on PKT_RX_FDIR_* flag in ol_flags.
+				 * on RTE_MBUF_F_RX_FDIR_* flag in ol_flags.
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct rte_mbuf_sched sched;
@@ -569,7 +685,7 @@ struct rte_mbuf {
 		} hash;                   /**< hash information */
 	};
 
-	/** Outer VLAN TCI (CPU order), valid if PKT_RX_QINQ is set. */
+	/** Outer VLAN TCI (CPU order), valid if RTE_MBUF_F_RX_QINQ is set. */
 	uint16_t vlan_tci_outer;
 
 	uint16_t buf_len;         /**< Length of segment buffer. */
@@ -659,14 +775,14 @@ struct rte_mbuf_ext_shared_info {
  * If a mbuf has its data in another mbuf and references it by mbuf
  * indirection, this mbuf can be defined as a cloned mbuf.
  */
-#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & IND_ATTACHED_MBUF)
+#define RTE_MBUF_CLONED(mb)     ((mb)->ol_flags & RTE_MBUF_F_INDIRECT)
 
 /**
  * Returns TRUE if given mbuf has an external buffer, or FALSE otherwise.
  *
  * External buffer is a user-provided anonymous buffer.
  */
-#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & EXT_ATTACHED_MBUF)
+#define RTE_MBUF_HAS_EXTBUF(mb) ((mb)->ol_flags & RTE_MBUF_F_EXTERNAL)
 
 /**
  * Returns TRUE if given mbuf is direct, or FALSE otherwise.
@@ -675,7 +791,7 @@ struct rte_mbuf_ext_shared_info {
  * can be defined as a direct mbuf.
  */
 #define RTE_MBUF_DIRECT(mb) \
-	(!((mb)->ol_flags & (IND_ATTACHED_MBUF | EXT_ATTACHED_MBUF)))
+	(!((mb)->ol_flags & (RTE_MBUF_F_INDIRECT | RTE_MBUF_F_EXTERNAL)))
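
The three helpers above partition all mbufs into direct, external and
indirect (cloned). A classification sketch (not part of the patch):

	const char *kind;

	if (RTE_MBUF_DIRECT(m))
		kind = "direct: data in the mbuf's own buffer";
	else if (RTE_MBUF_HAS_EXTBUF(m))
		kind = "external: data in a user-provided buffer";
	else
		kind = "indirect: data in another mbuf (clone)";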
 
 /** Uninitialized or unspecified port. */
 #define RTE_MBUF_PORT_INVALID UINT16_MAX
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index d55e162a68..db8e020665 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -130,7 +130,7 @@ init_shared_mem(void)
 		mark_free(dynfield1);
 
 		/* init free_flags */
-		for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+		for (mask = RTE_MBUF_F_FIRST_FREE; mask <= RTE_MBUF_F_LAST_FREE; mask <<= 1)
 			shm->free_flags |= mask;
 
 		process_score();
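
The free range above feeds the dynamic flag allocator. A registration
sketch (not part of the patch) using the public API from
rte_mbuf_dyn.h; the flag name is hypothetical:

	static const struct rte_mbuf_dynflag desc = {
		.name = "example_app_dynflag",
	};
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit >= 0)
		m->ol_flags |= 1ULL << bit;
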
diff --git a/lib/net/rte_ether.h b/lib/net/rte_ether.h
index b83e0d3fce..2c7da55b6b 100644
--- a/lib/net/rte_ether.h
+++ b/lib/net/rte_ether.h
@@ -331,7 +331,7 @@ static inline int rte_vlan_strip(struct rte_mbuf *m)
 		return -1;
 
 	vh = (struct rte_vlan_hdr *)(eh + 1);
-	m->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+	m->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 	m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
 
 	/* Copy ether header over rather than moving whole packet */
@@ -378,9 +378,9 @@ static inline int rte_vlan_insert(struct rte_mbuf **m)
 	vh = (struct rte_vlan_hdr *) (nh + 1);
 	vh->vlan_tci = rte_cpu_to_be_16((*m)->vlan_tci);
 
-	(*m)->ol_flags &= ~(PKT_RX_VLAN_STRIPPED | PKT_TX_VLAN);
+	(*m)->ol_flags &= ~(RTE_MBUF_F_RX_VLAN_STRIPPED | RTE_MBUF_F_TX_VLAN);
 
-	if ((*m)->ol_flags & PKT_TX_TUNNEL_MASK)
+	if ((*m)->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
 		(*m)->outer_l2_len += sizeof(struct rte_vlan_hdr);
 	else
 		(*m)->l2_len += sizeof(struct rte_vlan_hdr);
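
A software fallback sketch (not part of the patch): when the port does
not strip VLAN headers in hardware, rte_vlan_strip() can be applied to
received mbufs directly; process_tci() is a hypothetical helper:

	if (rte_vlan_strip(m) == 0)
		process_tci(m->vlan_tci);
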
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index b3d45e85db..c37cb04b60 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -334,7 +334,7 @@ rte_ipv4_phdr_cksum(const struct rte_ipv4_hdr *ipv4_hdr, uint64_t ol_flags)
 	psd_hdr.dst_addr = ipv4_hdr->dst_addr;
 	psd_hdr.zero = 0;
 	psd_hdr.proto = ipv4_hdr->next_proto_id;
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		l3_len = rte_be_to_cpu_16(ipv4_hdr->total_length);
@@ -474,7 +474,7 @@ rte_ipv6_phdr_cksum(const struct rte_ipv6_hdr *ipv6_hdr, uint64_t ol_flags)
 	} psd_hdr;
 
 	psd_hdr.proto = (uint32_t)(ipv6_hdr->proto << 24);
-	if (ol_flags & PKT_TX_TCP_SEG) {
+	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
 		psd_hdr.len = 0;
 	} else {
 		psd_hdr.len = ipv6_hdr->payload_len;
diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
index f4460202c0..53a7f4d360 100644
--- a/lib/net/rte_net.h
+++ b/lib/net/rte_net.h
@@ -121,17 +121,17 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 	 * Mainly it is required to avoid fragmented headers check if
 	 * no offloads are requested.
 	 */
-	if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
-			  PKT_TX_OUTER_IP_CKSUM)))
+	if (!(ol_flags & (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG |
+			  RTE_MBUF_F_TX_OUTER_IP_CKSUM)))
 		return 0;
 
-	if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
+	if (ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)) {
 		inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
 		/*
 		 * prepare outer IPv4 header checksum by setting it to 0,
 		 * in order to be computed by hardware NICs.
 		 */
-		if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) {
 			ipv4_hdr = rte_pktmbuf_mtod_offset(m,
 					struct rte_ipv4_hdr *, m->outer_l2_len);
 			ipv4_hdr->hdr_checksum = 0;
@@ -147,16 +147,16 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 		     inner_l3_offset + m->l3_len + m->l4_len))
 		return -ENOTSUP;
 
-	if (ol_flags & PKT_TX_IPV4) {
+	if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
 				inner_l3_offset);
 
-		if (ol_flags & PKT_TX_IP_CKSUM)
+		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
 			ipv4_hdr->hdr_checksum = 0;
 	}
 
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) {
-		if (ol_flags & PKT_TX_IPV4) {
+	if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
 			udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
@@ -171,9 +171,9 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
 			udp_hdr->dgram_cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
 					ol_flags);
 		}
-	} else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM ||
-			(ol_flags & PKT_TX_TCP_SEG)) {
-		if (ol_flags & PKT_TX_IPV4) {
+	} else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM ||
+			(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+		if (ol_flags & RTE_MBUF_F_TX_IPV4) {
 			/* non-TSO tcp or TSO */
 			tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
 					m->l3_len);
diff --git a/lib/pipeline/rte_table_action.c b/lib/pipeline/rte_table_action.c
index 4b0316bfed..ebab2444d3 100644
--- a/lib/pipeline/rte_table_action.c
+++ b/lib/pipeline/rte_table_action.c
@@ -2085,7 +2085,7 @@ pkt_work_tag(struct rte_mbuf *mbuf,
 	struct tag_data *data)
 {
 	mbuf->hash.fdir.hi = data->tag;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
 
 static __rte_always_inline void
@@ -2103,10 +2103,10 @@ pkt4_work_tag(struct rte_mbuf *mbuf0,
 	mbuf2->hash.fdir.hi = data2->tag;
 	mbuf3->hash.fdir.hi = data3->tag;
 
-	mbuf0->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf1->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf2->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-	mbuf3->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	mbuf0->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf1->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf2->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
+	mbuf3->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID;
 }
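
For illustration (not part of the patch), a downstream stage could
consume the tag set above as in this sketch; handle_tag() is a
hypothetical helper:

	if (m->ol_flags & RTE_MBUF_F_RX_FDIR_ID)
		handle_tag(m->hash.fdir.hi);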
 
 /**
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 7eb9f109ae..eb1b38be4d 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -236,10 +236,10 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Inner packet IP header checksum is not computed/verified.
 	 *
 	 * The checksum verification status would be set in mbuf using
-	 * PKT_RX_IP_CKSUM_xxx flags.
+	 * RTE_MBUF_F_RX_IP_CKSUM_xxx flags.
 	 *
	 * Inner IP checksum computation can also be enabled (per operation)
-	 * by setting the flag PKT_TX_IP_CKSUM in mbuf.
+	 * by setting the flag RTE_MBUF_F_TX_IP_CKSUM in mbuf.
 	 */
 	uint32_t ip_csum_enable : 1;
 
@@ -251,11 +251,11 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Inner packet L4 checksum is not computed/verified.
 	 *
 	 * The checksum verification status would be set in mbuf using
-	 * PKT_RX_L4_CKSUM_xxx flags.
+	 * RTE_MBUF_F_RX_L4_CKSUM_xxx flags.
 	 *
	 * Inner L4 checksum computation can also be enabled (per operation)
-	 * by setting the flags PKT_TX_TCP_CKSUM or PKT_TX_SCTP_CKSUM or
-	 * PKT_TX_UDP_CKSUM or PKT_TX_L4_MASK in mbuf.
+	 * by setting the flags RTE_MBUF_F_TX_TCP_CKSUM or RTE_MBUF_F_TX_SCTP_CKSUM or
+	 * RTE_MBUF_F_TX_UDP_CKSUM or RTE_MBUF_F_TX_L4_MASK in mbuf.
 	 */
 	uint32_t l4_csum_enable : 1;
 };
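
A configuration sketch (not part of the patch) enabling both inner
checksum offloads on an IPsec SA; all other fields of the xform are
omitted:

	struct rte_security_ipsec_xform ipsec_xform = {
		.options = {
			.ip_csum_enable = 1,
			.l4_csum_enable = 1,
		},
	};
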
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index e481906113..b6140a643b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -411,25 +411,25 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 static __rte_always_inline void
 virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 {
-	uint64_t csum_l4 = m_buf->ol_flags & PKT_TX_L4_MASK;
+	uint64_t csum_l4 = m_buf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG)
-		csum_l4 |= PKT_TX_TCP_CKSUM;
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+		csum_l4 |= RTE_MBUF_F_TX_TCP_CKSUM;
 
 	if (csum_l4) {
 		net_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
 		net_hdr->csum_start = m_buf->l2_len + m_buf->l3_len;
 
 		switch (csum_l4) {
-		case PKT_TX_TCP_CKSUM:
+		case RTE_MBUF_F_TX_TCP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_tcp_hdr,
 						cksum));
 			break;
-		case PKT_TX_UDP_CKSUM:
+		case RTE_MBUF_F_TX_UDP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_udp_hdr,
 						dgram_cksum));
 			break;
-		case PKT_TX_SCTP_CKSUM:
+		case RTE_MBUF_F_TX_SCTP_CKSUM:
 			net_hdr->csum_offset = (offsetof(struct rte_sctp_hdr,
 						cksum));
 			break;
@@ -441,7 +441,7 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 
 	/* IP cksum verification cannot be bypassed, then calculate here */
-	if (m_buf->ol_flags & PKT_TX_IP_CKSUM) {
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
 		struct rte_ipv4_hdr *ipv4_hdr;
 
 		ipv4_hdr = rte_pktmbuf_mtod_offset(m_buf, struct rte_ipv4_hdr *,
@@ -450,15 +450,15 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 		ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
 	}
 
-	if (m_buf->ol_flags & PKT_TX_TCP_SEG) {
-		if (m_buf->ol_flags & PKT_TX_IPV4)
+	if (m_buf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+		if (m_buf->ol_flags & RTE_MBUF_F_TX_IPV4)
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
 		else
 			net_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len
 					+ m_buf->l4_len;
-	} else if (m_buf->ol_flags & PKT_TX_UDP_SEG) {
+	} else if (m_buf->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
 		net_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP;
 		net_hdr->gso_size = m_buf->tso_segsz;
 		net_hdr->hdr_len = m_buf->l2_len + m_buf->l3_len +
@@ -2259,7 +2259,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		m->l3_len = rte_ipv4_hdr_len(ipv4_hdr);
 		if (data_len < m->l2_len + m->l3_len)
 			goto error;
-		m->ol_flags |= PKT_TX_IPV4;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV4;
 		*l4_proto = ipv4_hdr->next_proto_id;
 		break;
 	case RTE_ETHER_TYPE_IPV6:
@@ -2268,7 +2268,7 @@ parse_headers(struct rte_mbuf *m, uint8_t *l4_proto)
 		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
 				m->l2_len);
 		m->l3_len = sizeof(struct rte_ipv6_hdr);
-		m->ol_flags |= PKT_TX_IPV6;
+		m->ol_flags |= RTE_MBUF_F_TX_IPV6;
 		*l4_proto = ipv6_hdr->proto;
 		break;
 	default:
@@ -2323,17 +2323,17 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			case (offsetof(struct rte_tcp_hdr, cksum)):
 				if (l4_proto != IPPROTO_TCP)
 					goto error;
-				m->ol_flags |= PKT_TX_TCP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_TCP_CKSUM;
 				break;
 			case (offsetof(struct rte_udp_hdr, dgram_cksum)):
 				if (l4_proto != IPPROTO_UDP)
 					goto error;
-				m->ol_flags |= PKT_TX_UDP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_UDP_CKSUM;
 				break;
 			case (offsetof(struct rte_sctp_hdr, cksum)):
 				if (l4_proto != IPPROTO_SCTP)
 					goto error;
-				m->ol_flags |= PKT_TX_SCTP_CKSUM;
+				m->ol_flags |= RTE_MBUF_F_TX_SCTP_CKSUM;
 				break;
 			default:
 				goto error;
@@ -2355,14 +2355,14 @@ vhost_dequeue_offload_legacy(struct virtio_net_hdr *hdr, struct rte_mbuf *m)
 			tcp_len = (tcp_hdr->data_off & 0xf0) >> 2;
 			if (data_len < m->l2_len + m->l3_len + tcp_len)
 				goto error;
-			m->ol_flags |= PKT_TX_TCP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = tcp_len;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if (l4_proto != IPPROTO_UDP)
 				goto error;
-			m->ol_flags |= PKT_TX_UDP_SEG;
+			m->ol_flags |= RTE_MBUF_F_TX_UDP_SEG;
 			m->tso_segsz = hdr->gso_size;
 			m->l4_len = sizeof(struct rte_udp_hdr);
 			break;
@@ -2396,7 +2396,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		return;
 	}
 
-	m->ol_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
 
 	ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);
 	m->packet_type = ptype;
@@ -2423,7 +2423,7 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 
 		hdrlen = hdr_lens.l2_len + hdr_lens.l3_len + hdr_lens.l4_len;
 		if (hdr->csum_start <= hdrlen && l4_supported != 0) {
-			m->ol_flags |= PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_NONE;
 		} else {
 			/* Unknown proto or tunnel, do sw cksum. We can assume
 			 * the cksum field is in the first segment since the
@@ -2453,13 +2453,13 @@ vhost_dequeue_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *m,
 		case VIRTIO_NET_HDR_GSO_TCPV6:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_TCP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		case VIRTIO_NET_HDR_GSO_UDP:
 			if ((ptype & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
 				break;
-			m->ol_flags |= PKT_RX_LRO | PKT_RX_L4_CKSUM_NONE;
+			m->ol_flags |= RTE_MBUF_F_RX_LRO | RTE_MBUF_F_RX_L4_CKSUM_NONE;
 			m->tso_segsz = hdr->gso_size;
 			break;
 		default:
-- 
2.30.2



* Re: [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags Olivier Matz
@ 2021-10-16  7:47     ` Andrew Rybchenko
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Rybchenko @ 2021-10-16  7:47 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: David Marchand

On 10/15/21 10:24 PM, Olivier Matz wrote:
> The flags PKT_RX_L4_CKSUM_BAD and PKT_RX_IP_CKSUM_BAD are defined
> twice with the same value. Remove one of the occurrences, which was
> marked as "deprecated".
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


* Re: [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated Olivier Matz
@ 2021-10-16  7:50     ` Andrew Rybchenko
  2021-10-17 14:46       ` Ajit Khaparde
  2021-10-18  8:10       ` Olivier Matz
  0 siblings, 2 replies; 20+ messages in thread
From: Andrew Rybchenko @ 2021-10-16  7:50 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: David Marchand

On 10/15/21 10:24 PM, Olivier Matz wrote:
> The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
> marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> VLAN flags") (2017). But they were not using the RTE_DEPRECATED
> macro, because it did not exist at that time. Add it, and replace
> usage of these flags.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

I'd remove these flags completely. 4 years is definitely
enough. Yes, I realize that because of the missing
RTE_DEPRECATED markup users were not warned at build time.


* Re: [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to " Olivier Matz
@ 2021-10-16  7:57     ` Andrew Rybchenko
  2021-10-17 14:45       ` Ajit Khaparde
  0 siblings, 1 reply; 20+ messages in thread
From: Andrew Rybchenko @ 2021-10-16  7:57 UTC (permalink / raw)
  To: Olivier Matz, dev; +Cc: David Marchand

On 10/15/21 10:24 PM, Olivier Matz wrote:
> Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
> name. The old flags remain usable, but a deprecation warning is issued
> at compilation.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>



* Re: [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags
  2021-10-16  7:57     ` Andrew Rybchenko
@ 2021-10-17 14:45       ` Ajit Khaparde
  2021-10-18 11:15         ` Somnath Kotur
  0 siblings, 1 reply; 20+ messages in thread
From: Ajit Khaparde @ 2021-10-17 14:45 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, dpdk-dev, David Marchand

On Sat, Oct 16, 2021 at 12:57 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> On 10/15/21 10:24 PM, Olivier Matz wrote:
> > Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
> > name. The old flags remain usable, but a deprecation warning is issued
> > at compilation.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>


* Re: [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated
  2021-10-16  7:50     ` Andrew Rybchenko
@ 2021-10-17 14:46       ` Ajit Khaparde
  2021-10-18  8:10       ` Olivier Matz
  1 sibling, 0 replies; 20+ messages in thread
From: Ajit Khaparde @ 2021-10-17 14:46 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: Olivier Matz, dpdk-dev, David Marchand

On Sat, Oct 16, 2021 at 12:50 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> On 10/15/21 10:24 PM, Olivier Matz wrote:
> > The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
> > marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> > VLAN flags") (2017). But they were not using the RTE_DEPRECATED
> > macro, because it did not exist at that time. Add it, and replace
> > usage of these flags.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

>
> I'd remove these flags completely. 4 years is definitely
> enough. Yes, I realize that because of the missing
> RTE_DEPRECATED markup users were not warned at build time.


* Re: [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated
  2021-10-16  7:50     ` Andrew Rybchenko
  2021-10-17 14:46       ` Ajit Khaparde
@ 2021-10-18  8:10       ` Olivier Matz
  2021-10-18 11:15         ` Somnath Kotur
  1 sibling, 1 reply; 20+ messages in thread
From: Olivier Matz @ 2021-10-18  8:10 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, David Marchand

Hi Andrew,

On Sat, Oct 16, 2021 at 10:50:50AM +0300, Andrew Rybchenko wrote:
> On 10/15/21 10:24 PM, Olivier Matz wrote:
> > The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
> > marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> > VLAN flags") (2017). But they were not using the RTE_DEPRECATED
> > macro, because it did not exist at this time. Add it, and replace
> > usage of these flags.
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> I'd remove these flags completely. 4 years is definitely
> enough. Yes, I realize that because of missing
> RTE_DEPRECATED markup users were not warning on build.

I kept them because David pointed out to me that these flags are still
used by other projects. Adding an explicit RTE_DEPRECATED() would
prevent this, and... it doesn't hurt to keep them for one more year :)

Thank you for the review.


* Re: [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated
  2021-10-18  8:10       ` Olivier Matz
@ 2021-10-18 11:15         ` Somnath Kotur
  0 siblings, 0 replies; 20+ messages in thread
From: Somnath Kotur @ 2021-10-18 11:15 UTC (permalink / raw)
  To: Olivier Matz; +Cc: Andrew Rybchenko, dev, David Marchand

On Mon, Oct 18, 2021 at 1:40 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> Hi Andrew,
>
> On Sat, Oct 16, 2021 at 10:50:50AM +0300, Andrew Rybchenko wrote:
> > On 10/15/21 10:24 PM, Olivier Matz wrote:
> > > The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
> > > marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> > > VLAN flags") (2017). But they were not using the RTE_DEPRECATED
> > > macro, because it did not exist at that time. Add it, and replace
> > > usage of these flags.
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> >
> > Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >
> > I'd remove these flags completely. 4 years is definitely
> > enough. Yes, I realize that because of the missing
> > RTE_DEPRECATED markup users were not warned at build time.
>
> I kept them because David pointed me that these flags are still used by
> other projects. Adding an explicit RTE_DEPRECATED() would prevent this,
> and... it doesn't bother to add one more year :)
>
> Thank you for the review.
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>


* Re: [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to offload flags
  2021-10-17 14:45       ` Ajit Khaparde
@ 2021-10-18 11:15         ` Somnath Kotur
  0 siblings, 0 replies; 20+ messages in thread
From: Somnath Kotur @ 2021-10-18 11:15 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: Andrew Rybchenko, Olivier Matz, dpdk-dev, David Marchand

On Sun, Oct 17, 2021 at 8:16 PM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Sat, Oct 16, 2021 at 12:57 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >
> > On 10/15/21 10:24 PM, Olivier Matz wrote:
> > > Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
> > > name. The old flags remain usable, but a deprecation warning is issued
> > > at compilation.
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> >
> > Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> >
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>


* Re: [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace
  2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
                     ` (3 preceding siblings ...)
  2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to " Olivier Matz
@ 2021-10-24 11:43   ` David Marchand
  4 siblings, 0 replies; 20+ messages in thread
From: David Marchand @ 2021-10-24 11:43 UTC (permalink / raw)
  To: Olivier Matz; +Cc: dev, Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko

On Fri, Oct 15, 2021 at 9:24 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> This patchset fixes the mbuf offload flags namespace by adding an RTE_
> prefix to the definitions. The old flags remain usable, but a deprecation
> warning is issued at compilation.
>
> v2
> * rebase
> * remove part about PKT_RX_EIP_CKSUM_BAD, already in 74a74bf98c
> * mark old vlan flags as deprecated instead of removing them
> * clarify that rx checksum flags are not really removed (just remove dup)
>
>
> Olivier Matz (4):
>   mbuf: remove duplicate definition of cksum offload flags
>   mbuf: mark old VLAN offload flags as deprecated
>   cocci: add cocci script to rename mbuf offload flags
>   mbuf: add rte prefix to offload flags

I had to fix quite a few conflicts; hopefully, I did not break anything...
Series applied, thanks.


-- 
David Marchand



end of thread, other threads:[~2021-10-24 11:43 UTC | newest]

Thread overview: 20+ messages
2021-09-29 21:48 [dpdk-dev] [PATCH 0/3] mbuf: offload flags namespace Olivier Matz
2021-09-29 21:48 ` [dpdk-dev] [PATCH 1/3] mbuf: remove deprecated offload flags Olivier Matz
2021-10-04  8:29   ` David Marchand
2021-10-04  9:46     ` Olivier Matz
2021-09-29 21:48 ` [dpdk-dev] [PATCH 2/3] cocci: add cocci script to rename mbuf " Olivier Matz
2021-09-29 21:48 ` [dpdk-dev] [PATCH 3/3] mbuf: add rte prefix to " Olivier Matz
2021-10-15 19:24 ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace Olivier Matz
2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 1/4] mbuf: remove duplicate definition of cksum offload flags Olivier Matz
2021-10-16  7:47     ` Andrew Rybchenko
2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 2/4] mbuf: mark old VLAN offload flags as deprecated Olivier Matz
2021-10-16  7:50     ` Andrew Rybchenko
2021-10-17 14:46       ` Ajit Khaparde
2021-10-18  8:10       ` Olivier Matz
2021-10-18 11:15         ` Somnath Kotur
2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 3/4] cocci: add cocci script to rename mbuf offload flags Olivier Matz
2021-10-15 19:24   ` [dpdk-dev] [PATCH v2 4/4] mbuf: add rte prefix to " Olivier Matz
2021-10-16  7:57     ` Andrew Rybchenko
2021-10-17 14:45       ` Ajit Khaparde
2021-10-18 11:15         ` Somnath Kotur
2021-10-24 11:43   ` [dpdk-dev] [PATCH v2 0/4] mbuf: offload flags namespace David Marchand
